Kubernetes configuration options

How to configure CloudBees CD/RO for very large Helm deployments

A very large Helm deployment of CloudBees CD/RO is one that handles approximately:

  • 100K jobs per day

  • 2,000 running pipelines per day

  • 5M job steps per day

Active customers and partners can refer to the CloudBees CD/RO Level 2: Helm for Very Large Kubernetes Deployments training course for details.

The following tables summarize the changes to the default Helm chart for very large deployments:

CloudBees CD/RO server values

For the default CloudBees CD/RO Helm charts, refer to CloudBees CD/RO server values. You can find these cloudbees-flow values referenced in the values.yaml under the Flow server configuration section.

Table 1. CloudBees CD/RO server parameter values for very large Helm deployments

  Name                                Value
  server.replicas                     4
  server.ecconfigure                  "--serverInitMemory=70 --serverMaxMemory=85"
  server.resources.limits.cpu         16
  server.resources.limits.memory      128Gi
  server.resources.requests.cpu       4
  server.resources.requests.memory    16Gi

CloudBees CD/RO web server values

For the default CloudBees CD/RO Helm charts, refer to CloudBees CD/RO web server values. You can find these cloudbees-flow values referenced in the values.yaml under the Flow web server configuration section.

Table 2. CloudBees CD/RO web server parameter values for very large Helm deployments

  Name            Value
  web.replicas    2

Repository values

For the default CloudBees CD/RO Helm charts, refer to Repository values. You can find these cloudbees-flow values referenced in the values.yaml under the Flow repository configuration section.

Table 3. Repository parameter values for very large Helm deployments

  Name                                    Value
  repository.resources.requests.memory    768Mi

CloudBees Analytics server values

For the default CloudBees CD/RO Helm charts, refer to CloudBees Analytics server values. You can find these cloudbees-flow values referenced in the values.yaml under the Flow DevOps Insight configuration section.

Table 4. CloudBees Analytics server parameter values for very large Helm deployments

  Name                              Value
  dois.replicas                     3
  dois.esMinMasterNodes             2
  dois.esRam                        8192
  dois.lsMaxRam                     2048
  dois.resources.limits.cpu         4
  dois.resources.limits.memory      16Gi
  dois.resources.requests.cpu       4
  dois.resources.requests.memory    16Gi

CloudBees CD/RO agent values

For the default CloudBees CD/RO Helm charts, refer to CloudBees CD/RO agent configuration values. You can find these cloudbees-flow-agent values referenced in the agent values.yaml.

Table 5. CloudBees CD/RO agent parameter values for very large Helm deployments

  Name        Value
  replicas    2
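
Combined, the overrides from Tables 1 through 5 might look like the following myvalues.yaml sketch. This is a hedged summary of the tables above, not an exhaustive configuration; the agent replicas value belongs in the separate cloudbees-flow-agent chart values file:

server:
  replicas: 4
  ecconfigure: "--serverInitMemory=70 --serverMaxMemory=85"
  resources:
    limits:
      cpu: 16
      memory: 128Gi
    requests:
      cpu: 4
      memory: 16Gi
web:
  replicas: 2
repository:
  resources:
    requests:
      memory: 768Mi
dois:
  replicas: 3
  esMinMasterNodes: 2
  esRam: 8192
  lsMaxRam: 2048
  resources:
    limits:
      cpu: 4
      memory: 16Gi
    requests:
      cpu: 4
      memory: 16Gi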

How to set global values in CloudBees CD/RO Helm charts

Starting with v2023.10.0, you can configure global values for image settings that apply to all CloudBees CD/RO components. In v2023.10.0 and later cloudbees-flow values files, you can find these configurations by searching for the Global configurations section.

When you set a global image, it is applied to the following components:

  • server

  • web

  • repository

  • dois

  • dois.backup

  • boundAgent

  • zookeeper

  • internalGatewayAgent

  • externalGatewayAgent

This option improves usability and enables you to set and update chart values for these components in a single location, instead of for each component individually. However, this setting is optional, and you can still configure CloudBees CD/RO components individually to pull specific images.

The following sections do not apply values from the global section and must be individually configured:

  • dois.sysctlInitContainer

  • dois.backup.cleanupImageRepository

  • dois.backup.cleanupImageTag

  • mariadb

  • ingress-nginx

  • cdsidecarinjector

  • helmTests

For information on setting custom images, refer to How to configure custom images for CloudBees CD/RO.

Setting global chart values

Setting global values for image settings is an optional configuration for CloudBees CD/RO v2023.10.0 and later, meant to improve usability and maintainability of your Helm charts. Global chart values allow you to set a value in one location that, where applicable, is applied to all CloudBees CD/RO components.

Global-level chart values override component-level chart values. This means that if both a global-level value and a component-level value are set, the global value is used.

Before setting any global-level value, CloudBees strongly suggests ensuring the global value is valid and correct for all instances where it is applied.

The following is an example of setting a global image value:

  1. In your v2023.10.0 and later values file, navigate to the Global configurations section.

  2. Provide values for:

    • global.cloudbees.imageRegistry

    • global.cloudbees.imageTag

    • (OPTIONAL) global.cloudbees.imagePullSecrets

      You must enable imagePullSecrets if you are using a private registry. Secrets must be manually created or already exist in the namespace.

  3. (OPTIONAL) For CloudBees CD/RO component values that have a global.<value> set, remove the <component>.<value>.

    Global-level configurations override component-level configurations. However, from a troubleshooting point of view, having two different values configured in different locations in your values file may cause some confusion. Because of this, CloudBees suggests removing the component-level configuration for values already set at the global level.

You are now able to update the global image to use for all CloudBees CD/RO components in a single location in your values file.
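
For example, a minimal Global configurations section might look like the following sketch. The registry, tag, and secret names are placeholders, and the exact imagePullSecrets format should be checked against your chart version:

global:
  cloudbees:
    ## Placeholder registry and tag; replace with your own.
    imageRegistry: internal.registry.example/cloudbees
    imageTag: "2023.10.0"
    ## Optional; required for private registries. The secret must
    ## already exist in the namespace.
    # imagePullSecrets: my-registry-secret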

How to configure custom images for CloudBees CD/RO

If you need to use custom images for individual components in your project, you can configure the image information in your myvalues.yaml. The following sections describe how to set custom images for bound agents, ingress-nginx, the CloudBees sidecar injector, MariaDB, and ZooKeeper.

If you set a values.global.cloudbees image and a component-level image for bound agents in either the default CloudBees CD/RO values file or your myvalues.yaml, the values.global.cloudbees image value overrides the component-level image values.

Before configuring custom images for bound agents, ensure you do not have images configured for values.global.cloudbees within your values files.

Configure custom bound agent images

This section applies to v2023.10.0 and later Helm charts. In CloudBees CD/RO Helm charts v2023.08.0 and earlier, the boundAgent section included image configuration fields by default.

Add a custom boundAgent image to Helm chart

In CloudBees CD/RO v2023.10.0 and above Helm charts, global image configurations were implemented in the cloudbees-flow values file, values.global.cloudbees. As part of this change, bound agents were reconfigured to use the cloudbees-flow-agent.values.image values. Additionally, the boundAgent.imageRegistry values were removed from the cloudbees-flow values file.

If your project uses custom images for bound agents, you must make the following changes to CloudBees CD/RO v2023.10.0 and above Helm charts before starting the upgrade:

  1. In your cloudbees-flow myvalues.yaml, ensure no values.global.cloudbees image value is set.

    If you set both a values.global.cloudbees image value and any component-level image values in your myvalues.yaml, the values.global.cloudbees image value overrides all component-level image values.
  2. In the boundAgent section of your cloudbees-flow values file, add the following chart configuration:

    images:
      ## Image registry to pull CloudBees CD/RO boundAgent images from.
      ## Example: registry: "123456789012.dkr.ecr.us-east-1.amazonaws.com"
      registry: "<YOUR-IMAGE-REGISTRY>"
      ## The `imageRepository` in the `images.registry` to pull the bound agent image from.
      imageRepository: "<YOUR-IMAGE-REPO>"
      ## CloudBees CD/RO bound agent image tag to pull.
      tag: "<YOUR-IMAGE-TAG>"
    • For example, if your project uses an internal registry called internal.registry.example/cloudbees:

      images:
        ## Image registry to pull CloudBees CD/RO boundAgent images from.
        ## Example: registry: "123456789012.dkr.ecr.us-east-1.amazonaws.com"
        registry: "internal.registry.example/cloudbees"
        ## The `imageRepository` in the `images.registry` to pull the bound agent image from.
        imageRepository: "cbflow-boundagent"
        ## CloudBees CD/RO bound agent image tag to pull.
        tag: "1.0"

You can now upgrade to CloudBees CD/RO v2023.10.0 and above using your custom bound agent image.
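
The upgrade itself follows the same pattern used elsewhere in this guide; a hedged sketch, with the release name, values file, and namespace as placeholders:

helm upgrade <releaseName> cloudbees/cloudbees-flow \
  -f myvalues.yaml \
  --namespace <namespace> \
  --timeout 10000s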

Configure custom ingress-nginx images

Add a custom ingress-nginx image to Helm chart

If your project uses custom images for ingress-nginx, you must make the following image changes to the CloudBees CD/RO Helm charts in your myvalues.yaml before starting the upgrade.

In the ingress-nginx section of your cloudbees-flow values file, add the following chart configuration:

controller:
  image:
    registry: <YOUR-IMAGE-REGISTRY>
    image: <YOUR-IMAGE>
    ## For backward compatibility, consider setting the full image URL via the repository value below.
    ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
    # repository:
    tag: "<YOUR-IMAGE-TAG>"
    digest: <YOUR-DIGEST-HASH>
  admissionWebhooks:
    patch:
      image:
        registry: <YOUR-IMAGE-REGISTRY>
        image: <YOUR-IMAGE>
        ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
        # repository:
        tag: "<YOUR-IMAGE-TAG>"
        digest: <YOUR-DIGEST-HASH>
## Set images if enabled
defaultBackend:
  enabled: false
  image:
    registry: <YOUR-IMAGE-REGISTRY>
    image: <YOUR-IMAGE>
    ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
    # repository:
    tag: "<YOUR-IMAGE-TAG>"

For example, if your project uses an internal registry called internal.registry.example/ingress-nginx:

controller:
  image:
    registry: internal.registry.example/ingress-nginx
    image: controller
    ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
    # repository:
    tag: "v1.1.0"
    digest: sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
  admissionWebhooks:
    patch:
      image:
        registry: internal.registry.example/ingress-nginx
        image: kube-webhook-certgen
        ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
        # repository:
        tag: "v1.1.1"
        digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
## Set images if enabled
defaultBackend:
  enabled: false
  image:
    registry: internal.registry.example/ingress-nginx
    image: defaultbackend-amd64
    ## Use *either* the current default registry/image or the repository format. Otherwise, installing the chart by providing the `values.yaml` will fail.
    # repository:
    tag: "1.5"

You can now upgrade CloudBees CD/RO to use your custom ingress-nginx image.

Configure custom CloudBees sidecar injector images

Add a custom CloudBees sidecar injector image to Helm chart

If your project uses custom images for the CloudBees sidecar injector, you must make the following image changes to the CloudBees CD/RO Helm charts in your myvalues.yaml before starting the upgrade.

In the CloudBees sidecar injector (cdsidecarinjector) section of your cloudbees-flow values file, add the following chart configuration:

enabled: true
hub: <YOUR-IMAGE-REGISTRY>
image: <YOUR-IMAGE>
tag: "<YOUR-IMAGE-TAG>"
requestCert:
  image: <YOUR-IMAGE>
  tag: "<YOUR-IMAGE-TAG>"
  • For example, if your project uses an internal registry called internal.registry.example/cloudbees:

    enabled: true
    hub: internal.registry.example/cloudbees
    image: test-sidecar-injector
    tag: "2.2.1"
    requestCert:
      image: test-cert-requester
      tag: "2.2.1"

You can now upgrade CloudBees CD/RO to use your custom cloudbees-sidecar-injector image.

Configure custom MariaDB images

Add a custom MariaDB image to Helm chart

If your project uses custom images for MariaDB, you must make the following image changes to the CloudBees CD/RO Helm charts in your myvalues.yaml before starting the upgrade.

In the mariadb section of your cloudbees-flow values file, add the following chart configuration:

image:
  registry: <YOUR-IMAGE-REGISTRY>
  repository: <YOUR-IMAGE-REPO>
  tag: <YOUR-IMAGE-TAG>

For example, if your project uses an internal registry called internal.registry.example/cloudbees:

image:
  registry: internal.registry.example/cloudbees
  repository: mariadb
  tag: 10.3.20-debian-9-r0

You can now upgrade CloudBees CD/RO to use your custom MariaDB image.

Configure custom ZooKeeper images

Add a custom ZooKeeper image to Helm chart with a global image set

If your project uses custom images for ZooKeeper and has global values set, you must make the following image changes to the CloudBees CD/RO Helm charts in your myvalues.yaml before starting the upgrade:

  1. In your cloudbees-flow myvalues.yaml, ensure the values.global.cloudbees.imageRegistry and values.global.cloudbees.imageTag image values are set.

    Ensure your ZooKeeper image is located in the values.global.cloudbees.imageRegistry and is tagged with the values.global.cloudbees.imageTag. If it is not, your ZooKeeper image fails to load. If your project is not using these values, refer to Add a custom ZooKeeper image to Helm chart with no global image set.
  2. In the zookeeper section of your cloudbees-flow values file, add the following chart configuration:

    image:
      imageRepository: <YOUR-IMAGE-REPO>
    • For example, if your project uses a global registry called global-images-registry.example with image tags set to global-image-tag, and your ZooKeeper image was in the zookeeper-image repository, your values file should have the following configurations:

      global:
        cloudbees:
          imageRegistry: global-images-registry.example
          imageTag: global-image-tag
      # ...
      zookeeper:
        image:
          imageRepository: zookeeper-image

You can now upgrade CloudBees CD/RO to use your custom ZooKeeper image.

Add a custom ZooKeeper image to Helm chart with no global image set

If your project uses custom images for ZooKeeper and has no global image values set, you must make the following image changes to the CloudBees CD/RO Helm charts in your myvalues.yaml before starting the upgrade:

  1. In your cloudbees-flow myvalues.yaml, ensure no values.global.cloudbees image value is set.

    If values.global.cloudbees.imageRegistry and values.global.cloudbees.imageTag are set, your ZooKeeper image fails to load with the following configuration. If you are using these values, refer to Add a custom ZooKeeper image to Helm chart with a global image set.
  2. In the zookeeper section of your cloudbees-flow values file, add the following chart configuration:

    image:
      repository: <YOUR-IMAGE-REGISTRY>/<YOUR-IMAGE-REPO>
      tag: "<YOUR-IMAGE-TAG>"
    • For example, if your project uses an internal registry called internal.registry.example with an image repository called zookeeper-image:

      image:
        ## Container repository to pull ZooKeeper image from.
        repository: internal.registry.example/zookeeper-image
        ## ZooKeeper image tag to pull.
        tag: "2023.08.0.167214_3.2.51_20230809"

You can now upgrade CloudBees CD/RO to use your custom ZooKeeper image.

How to configure load balancing with ingress

For the default CloudBees CD/RO Helm charts, refer to Ingress values. You can find these cloudbees-flow values referenced in the values.yaml under the Flow ingress configuration section.

An ingress controller typically does not eliminate the need for an external load balancer but adds a layer of routing and control behind the load balancer. However, you can configure the ingress controller so all service endpoints, such as web, server, and repository, may be exposed from the same domain name and load balancer endpoint.

CloudBees recommends configuring your ingress controller so all CloudBees CD/RO services can be exposed through a single load balancer.

By default, ingress is enabled in the CloudBees CD/RO chart. The following sections summarize the settings.

To run CloudBees CD/RO with Kubernetes 1.22 and later, you must use the ingress-nginx controller with the following required settings:

  • ingress-nginx.enabled=true

  • ingress.class=nginx

  • nginx-ingress.enabled=false
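
In a myvalues.yaml, these required settings might look like the following sketch:

ingress:
  class: nginx
ingress-nginx:
  enabled: true
nginx-ingress:
  enabled: false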

Kubernetes versions 1.21 and earlier

  Name                                               Description                                  Default
  nginx-ingress.controller.ingressClass              —                                            flow-ingress
  nginx-ingress.controller.publishService.enabled    —                                            true
  nginx-ingress.controller.scope.enabled             —                                            true
  nginx-ingress.enabled                              —                                            true
  nginx-ingress.tcp.61613                            CloudBees CD/RO server                       {{ .Release.Namespace }}/flow-server:61613
  nginx-ingress.tcp.8200                             CloudBees CD/RO repository                   {{ .Release.Namespace }}/flow-repository:8200
  nginx-ingress.tcp.8443                             CloudBees CD/RO web server                   {{ .Release.Namespace }}/flow-server:8443
  nginx-ingress.tcp.9200                             CloudBees Analytics Elasticsearch database   {{ .Release.Namespace }}/flow-devopsinsight:9200
  nginx-ingress.tcp.9500                             CloudBees Analytics server                   {{ .Release.Namespace }}/flow-devopsinsight:9500

Kubernetes versions 1.22 and later

  Name                                               Description                                  Default
  ingress-nginx.controller.ingressClass              —                                            flow-ingress
  ingress-nginx.controller.publishService.enabled    —                                            true
  ingress-nginx.controller.scope.enabled             —                                            true
  ingress-nginx.enabled                              —                                            true
  ingress-nginx.tcp.61613                            CloudBees CD/RO server                       {{ .Release.Namespace }}/flow-server:61613
  ingress-nginx.tcp.8200                             CloudBees CD/RO repository                   {{ .Release.Namespace }}/flow-repository:8200
  ingress-nginx.tcp.8443                             CloudBees CD/RO web server                   {{ .Release.Namespace }}/flow-server:8443
  ingress-nginx.tcp.9200                             CloudBees Analytics Elasticsearch database   {{ .Release.Namespace }}/flow-devopsinsight:9200
  ingress-nginx.tcp.9500                             CloudBees Analytics server                   {{ .Release.Namespace }}/flow-devopsinsight:9500
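
To set these TCP mappings explicitly in a myvalues.yaml, the section might look like the following sketch, shown with the default targets from the table above:

ingress-nginx:
  enabled: true
  tcp:
    61613: "{{ .Release.Namespace }}/flow-server:61613"
    8200: "{{ .Release.Namespace }}/flow-repository:8200"
    8443: "{{ .Release.Namespace }}/flow-server:8443"
    9200: "{{ .Release.Namespace }}/flow-devopsinsight:9200"
    9500: "{{ .Release.Namespace }}/flow-devopsinsight:9500"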

How to install CloudBees CD/RO on Kubernetes using an existing database

You can install CloudBees CD/RO using your existing database, if you are:

  • Performing an initial installation of CloudBees CD/RO.

  • Migrating from a traditional CloudBees CD/RO installation to Kubernetes.

  • Spinning up an additional testing or production environment.

  • Upgrading between CloudBees CD/RO versions.

The procedures to connect your existing database to CloudBees CD/RO are nearly the same for each of these scenarios. However, if this is your initial installation, it may be easier to install CloudBees CD/RO using the instructions found in Install CloudBees CD/RO production server, and then update your installation with the steps listed here.

If you only want to find the values needed to configure your existing database as part of the Helm chart customization process, you can also refer to Database values.

Before you start

  1. Ensure the CloudBees CD/RO version you are installing or upgrading to supports the database you want to integrate. For more information on which databases are supported by each CloudBees CD/RO release, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

  2. Always make backups of your project-specific resources prior to installing or upgrading your CloudBees CD/RO version. These backups may be vital to recovering your data if critical errors occur during upgrades.

  3. Always perform adequate and thorough testing in a development environment before releasing any version installation or upgrade into your production environment.

  4. To see an example of a complete CloudBees CD/RO installation (not upgrade) using an existing database, refer to Example installation of CloudBees CD/RO using an existing database.

If you do not already have a myvalues.yaml, refer to CloudBees CD/RO default Helm charts to set yours up before continuing these steps.

Installing CloudBees CD/RO with your existing database

To install CloudBees CD/RO with your existing database:

  1. Make backups of your deployment and database. These backups may be necessary to recover data if critical errors occur during the upgrade process.

  2. Set the variables for the current release’s name, its namespace, and version you want to install/upgrade to:

    releaseName="<your-current-release>"
    namespace="<your-current-release-namespace>"
    # Example upgradeVersion value: "2.21.0"
    upgradeVersion="<version-of-upgrade-release>"
  3. Save the values from your current release to a values file by running:

    helm get values $releaseName --namespace $namespace > old-values.yaml
    If this is your initial installation, this step is not needed; continue and apply the same steps to your myvalues.yaml as described for old-values.yaml.
  4. Ensure your old-values.yaml contains values, or set the values, for the following parameters:

    • database.externalEndpoint

    • database.dbPort

    • database.dbType

    • database.dbName

    • database.dbUser

    • database.dbPassword

    • flowLicense

    • flowCredentials.adminPassword

      When using the helm install command, you can use the --set option to specify these values.
  5. If your old-values.yaml uses the images.tag, you must update it with the image tag of the new CloudBees CD/RO release you want to upgrade to.

    If this is your initial installation, and you downloaded the latest version of the values file from Sonatype Nexus, your values file already contains this information unless you removed it during customization.
    If you do not update the images.tag or update it with an incorrect value, your installation/upgrade will not succeed.

After you have performed these steps, set any additional values you need in your old-values.yaml for the CloudBees CD/RO installation. For more information, refer to Configure Helm charts.
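
For reference, the database-related section of an old-values.yaml might look like the following sketch, using the same example values as the complete installation example later in this section:

database:
  externalEndpoint: mssql.k8s.xyz
  dbPort: 1433
  dbType: sqlserver
  dbName: myexistingdb
  dbUser: dbadmin
  dbPassword: mySecretDbPassword
flowCredentials:
  adminPassword: mySecretFlowPassword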

Once you have your values file ready for installation, ensure the network connection between your database instance and Kubernetes cluster is active before continuing.

Next, you need the flow-server passkey and keystore files from your existing CloudBees CD/RO installation, converted to base64 format:

  1. Copy and save your flow-server passkey:

    # Get the flow-server pod name
    cdServerPod=$(kubectl get pod -l app=flow-server -o \
      jsonpath='{.items[*].metadata.name}' -n $namespace); echo $cdServerPod
    # Get the passkey file name
    passkey=$(kubectl -n $namespace exec $cdServerPod -- ls /tmp/ | grep passkey); echo $passkey
    # Copy the passkey file from the flow-server pod to the local machine
    kubectl -n $namespace cp $cdServerPod:/tmp/$passkey ./passkey
    # Ensure the passkey file has been copied
    ls -l | grep passkey
  2. Convert your passkey to base64 format:

    base64 -i passkey -o passkey.b64
  3. Copy and store the flow-server keystore files:

    # Get the keystore file name
    keystore=$(kubectl -n $namespace exec $cdServerPod -- ls /tmp/ | grep keystore); echo $keystore
    # Copy the keystore file from flow-server pod to your local machine
    kubectl -n $namespace cp $cdServerPod:/tmp/$keystore ./keystore
    # Ensure the keystore file has been copied
    ls -l | grep keystore
  4. Convert your keystore to base64 format:

    base64 -i keystore -o keystore.b64
  5. Install CloudBees CD/RO using your old-values.yaml, passkey, keystore, and license.xml (if required):

    # Helm exits when all tasks are complete, regardless of timeout. However, if your
    # database is very large, the timeout may need to be increased (for example,
    # --timeout 10800s) to prevent returning a FAILED status.
    # Include the final --set-file only if your installation requires a license.xml.
    helm install $releaseName cloudbees/cloudbees-flow --namespace $namespace \
      --values old-values.yaml \
      --set-file server.customConfig.passkey\\.b64=passkey.b64 \
      --set-file server.customConfig.keystore\\.b64=keystore.b64 \
      --timeout 4200s \
      --set-file flowLicense.licenseData=license.xml
    If you are installing a CloudBees CD/RO instance with an external database, do not use the --wait option. The flow-server-init-job cannot be started if --wait is used.

Example installation of CloudBees CD/RO using an existing database

For your reference, the following shows a complete installation example of CloudBees CD/RO using an existing database:

Example CloudBees CD/RO installation using existing database
dbType=sqlserver
dbEndpoint=mssql.k8s.xyz
dbPort=1433
dbUser=dbadmin
dbPass=mySecretDbPassword
dbName=myexistingdb
flowPassword=mySecretFlowPassword
doisPassword=mySecretDoisPassword
licenseFile=path/to/license.xml
passkeyFile=path/to/passkey
keystoreFile=path/to/keystore
storageClass=nfs-client
hostname=flow-server.k8s.xyz
namespace=test-existing-db
release=cloudbees-flow
releaseVersion=2.21

base64 -i $passkeyFile -o passkey.b64
base64 -i $keystoreFile -o keystore.b64

kubectl create namespace $namespace
kubectl -n $namespace create secret \
  generic $release-cloudbees-flow-credentials \
  --from-literal CBF_SERVER_ADMIN_PASSWORD=$flowPassword
kubectl -n $namespace create secret \
  generic $release-cloudbees-flow-dois \
  --from-literal CBF_DOIS_ADMIN_PASSWORD=$flowPassword \
  --from-literal CBF_DOIS_PASSWORD=$doisPassword

helm repo add cloudbees https://charts.cloudbees.com/public/cloudbees
helm repo update
helm search repo cloudbees/cloudbees-flow --versions

helm install $release cloudbees/cloudbees-flow --version $releaseVersion \
  --namespace $namespace \
  --set storage.volumes.serverPlugins.storageClass=$storageClass \
  --set storage.volumes.repositoryStorage.storageClass=$storageClass \
  --set storage.volumes.doisStorage.storageClass=$storageClass \
  --set database.externalEndpoint=$dbEndpoint \
  --set database.dbPort=$dbPort \
  --set database.dbType=$dbType \
  --set database.dbName=$dbName \
  --set database.dbUser=$dbUser \
  --set database.dbPassword=$dbPass \
  --set flowCredentials.existingSecret=$release-cloudbees-flow-credentials \
  --set dois.credentials.existingSecret=$release-cloudbees-flow-dois \
  --set serverName=$hostname \
  --set ingress.host=$hostname \
  --set ingress.class=$release-nginx \
  --set ingress-nginx.controller.ingressClassResource.name=$release-nginx \
  --set ingress-nginx.controller.ingressClassResource.enabled=true \
  --set ingress-nginx.controller.ingressClassResource.default=false \
  --set ingress-nginx.controller.ingressClassResource.controllerValue="k8s.io/$release-ingress-nginx" \
  --set-file flowLicense.licenseData=$licenseFile \
  --set-file server.customConfig.passkey\\.b64=passkey.b64 \
  --set-file server.customConfig.keystore\\.b64=keystore.b64 \
  --timeout 4200s

How to install custom ingress controllers with CloudBees CD/RO

To configure CloudBees CD/RO to use your custom or existing ingress controller:

  1. Configure your ingress controller manifest to expose the following required TCP ports:

    1. TCP port 8200 is required by flow-repository.

    2. TCP ports 8443 and 61613 are required by flow-server.

    3. (Optional) If you are using CloudBees Analytics, TCP ports 9200 and 9500 are required by flow-devopsinsight.

      If you need help configuring the ingress controller manifest, refer to Exposing TCP and UDP services in the NGINX Ingress Controller documentation. Ensure you follow the example for TCP/UDP proxy support.
  2. Update your myvalues.yaml with your custom ingress controller values:

    ingress:
      enabled: true
      host: <your-host>
      annotations: <your-annotations>
      class: <your-custom-ingress-class>
    For a reference, refer to CloudBees CD/RO Helm chart ingress example.
  3. In your myvalues.yaml, ensure ingress-nginx.enabled is configured as false for your installation.

    ingress-nginx:
      enabled: false
    If you do not have the ingress-nginx.enabled entry in your myvalues.yaml, by default, it is set to false in the cloudbees-flow values file.
  4. If you have not already, apply your ingress controller manifest to your Kubernetes CloudBees CD/RO cluster.

  5. To apply the custom ingress controller configuration, update your CloudBees CD/RO installation:

    helm upgrade <server-releasename> cloudbees/cloudbees-flow \
      -f <myvalues.yaml> \
      --namespace <server-namespace> \
      --timeout 10000s

How to override CloudBees CD/RO Kubernetes network policies

As of v2023.04.0, you can override the default network policies for CloudBees CD/RO on Kubernetes by adding custom network policies and ingress controllers. This is useful for deploying custom ingress controllers that handle internal networking for CloudBees CD/RO components or agents.

CloudBees strongly suggests testing all network policy changes in a development environment before releasing them into your production environment.

Before you start

Before overriding the default CloudBees CD/RO network policies:

  • You must have one or more NetworkPolicy manifests that describe the network behavior for all CloudBees CD/RO components.

    Example NetworkPolicy manifest
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      annotations:
        meta.helm.sh/release-name: cbflow
        meta.helm.sh/release-namespace: cbflow
      labels:
        app.kubernetes.io/managed-by: Helm
      name: web-policy
      namespace: cbflow
    spec:
      ingress:
      - from:
        - ipBlock:
            cidr: 0.0.0.0/0
        - podSelector:
            matchLabels:
              app: flow-bound-agent
              release: cbflow
        - podSelector:
            matchLabels:
              app: flow-server
              release: cbflow
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
        ports:
        - port: 2080
          protocol: TCP
        - port: 2443
          protocol: TCP
      podSelector:
        matchLabels:
          app: flow-web
          release: cbflow
      policyTypes:
      - Ingress
  • Your NetworkPolicy manifest(s) must be present in your cluster. To add manifest(s) to your cluster, run:

    kubectl apply -f <your-NetworkPolicy-manifest.yaml> \
      --namespace <development-server-namespace>
    If you have more than one manifest, use additional instances of -f to include them in the previous command.
  • If your myvalues.yaml is based on the cloudbees-flow values file v2.23.1 or earlier, you need to update the Network isolation configuration section with new configurations. To do so:

    1. Go to Sonatype Nexus and download the latest cloudbees-flow release.

    2. Extract the package, and open the cloudbees-flow/values.yaml.

    3. Find the Network isolation configuration section in the values.yaml and copy it to your myvalues.yaml. Ensure the networkIsolation.networkPolicies configuration is present.

      When updating your myvalues.yaml with the latest Network isolation configuration section, ensure:

      • You do not overwrite any custom networkIsolation.* settings.

      • Your values file is valid and the indentation is correct.

      Failing to do so may cause your upgrade to fail or other unexpected behavior.

Override CloudBees CD/RO Kubernetes network policies

After you’ve met the requirements in Before you start, in your myvalues.yaml:

  1. Navigate to the Network isolation configuration section.

  2. To disable the default CloudBees CD/RO network, configure networkIsolation.networkPolicies.enable: false.

    After configuring networkIsolation.networkPolicies.enable: false, CloudBees CD/RO does not create its default network policies and depends entirely on your custom policies. Failure to correctly configure your custom network policies may cause unexpected behavior in your deployment.
  3. In the additionalIngressSelector for each CloudBees CD/RO component, provide the network policy configurations for your ingress.

    Example custom cbflow-web ingress configuration
    web:
      additionalIngressSelector:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx

Once you’ve updated your myvalues.yaml, update your flow-server deployment:

helm upgrade <development-server-releasename> cloudbees/cloudbees-flow \
  -f <myvalues.yaml> \
  --namespace <development-server-namespace> \
  --timeout 10000s

You can now test your custom network policies in your development environment. After testing is complete, to update your production servers, refer to Update CloudBees CD/RO production servers.
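
To confirm which policies are active after the upgrade, a quick check such as the following can help; the policy name web-policy comes from the example manifest above, and the namespace is a placeholder:

kubectl get networkpolicy --namespace <development-server-namespace>
kubectl describe networkpolicy web-policy --namespace <development-server-namespace>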

How to install custom CloudBees Analytics init container images

CloudBees CD/RO supports integrations using custom CloudBees Analytics init container images. This section guides you through how to:

  1. Update the CloudBees Analytics Kubernetes template for custom images.

    If you are using CloudBees CD/RO v2023.08.0 or earlier, this step is required. If you are using CloudBees CD/RO v2023.10.0 and later, the CloudBees Analytics Kubernetes template includes this feature, and you should start with Add custom CloudBees Analytics images to your values file.
  2. Add custom CloudBees Analytics images to your values file.

Update the CloudBees Analytics Kubernetes template for custom images

This task is only needed if you are using CloudBees CD/RO v2023.08.0 or earlier.

To include custom CloudBees Analytics init container images with CloudBees CD/RO installations or upgrades, you must update the charts/cloudbees-flow/templates/dois-stateful-set.yaml in your CloudBees CD/RO v2023.08.0 or earlier Helm charts. To do so:

  1. Navigate to and open charts/cloudbees-flow/templates/dois-stateful-set.yaml.

  2. In the dois-stateful-set.yaml, navigate to spec.initContainers.image.

  3. Update the image key-value with:

    image: {{ .Values.dois.sysctlInitContainer.imageRepository }}:{{ .Values.dois.sysctlInitContainer.imageTag }}
  4. Ensure the YAML is well-formed and save.

After performing these steps, the template pulls the CloudBees Analytics init container image configured in your values file based on the values of dois.sysctlInitContainer.imageRepository and dois.sysctlInitContainer.imageTag.

Add custom CloudBees Analytics images to your values file

If you are using CloudBees CD/RO v2023.08.0 or earlier, ensure you perform the steps in Update the CloudBees Analytics Kubernetes template for custom images before continuing.

In your values file, the dois.sysctlInitContainer.imageRepository and dois.sysctlInitContainer.imageTag values are used to configure the CloudBees Analytics image to include during installations and upgrades. To set these values in your values file:

For CloudBees CD/RO v2023.08.0 or earlier:
  1. Open your values file and search for dois:. Ensure this brings you to the CloudBees Analytics configuration section of the values file.

  2. In the CloudBees Analytics section of your values file, add the following:

    ## Enable or disable a *privileged* `sysctlInitContainer` if
    ## the sysctl `vm.max_map_count` setting is set by another method.
    sysctlInitContainer:
      enabled: true
      ## The `sysctlInitContainer.imageRepository` sysctlInitContainer image repository.
      imageRepository: busybox
      ## The `sysctlInitContainer.imageTag` sysctlInitContainer image tag.
      imageTag: stable
  3. Replace the imageRepository value with your init container image repository.

  4. Replace the imageTag value with your init container image tag.

  5. Ensure the YAML is well-formed and save.

Now that you have updated your values file with your custom CloudBees Analytics init container image, update your CloudBees CD/RO environment to apply your image. For more information, refer to Update CloudBees CD/RO production servers.

Always ensure you perform sufficient testing in your development environment before applying your changes to production environments.

For CloudBees CD/RO v2023.10.0 or later:
  1. Open your values file and navigate to dois.sysctlInitContainer.

  2. Replace the imageRepository value with your init container image repository.

  3. Replace the imageTag value with your init container image tag.

  4. Ensure the YAML is well-formed and save.

Now that you have updated your values file with your custom CloudBees Analytics init container image, update your CloudBees CD/RO environment to apply your image. For more information, refer to Update CloudBees CD/RO production servers.

Always ensure you perform sufficient testing in your development environment before applying your changes to production environments.

How to configure custom probing values

In CloudBees CD/RO v2023.12.0 and later Helm charts, liveness and readiness probe values were added for the CloudBees CD/RO server jobInit, web server, and repository server components.

To override the default values in the CloudBees CD/RO Helm charts, follow the instructions below to configure custom values in your myvalues.yaml.

Configure probing values for CloudBees CD/RO server jobInit

Add custom jobInit probe values

To configure custom jobInit.livenessProbe values:

  1. In your myvalues.yaml, navigate to values.jobInit.

  2. Add the following fields under jobInit:

    ## Kubernetes Liveness Probes:
    livenessProbe:
      initialDelaySeconds: <INITIAL-DELAY-IN-SECONDS>
      periodSeconds: <PROBE-PERIOD-IN-SECONDS>
      timeoutSeconds: <TIMEOUT-LENGTH-IN-SECONDS>
    • The default values are:

      ## Kubernetes Liveness Probes:
      livenessProbe:
        initialDelaySeconds: 60
        periodSeconds: 60
        timeoutSeconds: 10
  3. Install or upgrade your CloudBees CD/RO instance to apply these values.
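
Putting it together, a jobInit override in myvalues.yaml might look like the following sketch. The probe timings are illustrative only, and the top-level jobInit key is assumed from the values.jobInit path referenced above:

jobInit:
  ## Kubernetes Liveness Probes:
  livenessProbe:
    initialDelaySeconds: 120
    periodSeconds: 60
    timeoutSeconds: 10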

Configure probing values for CloudBees CD/RO web server

Add custom web probe values

To configure custom web.livenessProbe or web.readinessProbe values:

  1. In your myvalues.yaml, navigate to values.web.

  2. Add the following applicable fields under web:

    web.livenessProbe:

    ## Kubernetes Liveness Probes:
    livenessProbe:
      initialDelaySeconds: <INITIAL-DELAY-IN-SECONDS>
      periodSeconds: <PROBE-PERIOD-IN-SECONDS>
      timeoutSeconds: <TIMEOUT-LENGTH-IN-SECONDS>

    web.readinessProbe:

    ## Kubernetes Readiness Probes:
    readinessProbe:
      initialDelaySeconds: <INITIAL-DELAY-IN-SECONDS>
      periodSeconds: <PROBE-PERIOD-IN-SECONDS>
      timeoutSeconds: <TIMEOUT-LENGTH-IN-SECONDS>
      failureThreshold: <FAILURE-THRESHOLD-COUNT>
    • The default values are:

      web.livenessProbe defaults:

      ## Kubernetes Liveness Probes:
      livenessProbe:
        initialDelaySeconds: 10
        periodSeconds: 60
        timeoutSeconds: 10
        failureThreshold: 3

      web.readinessProbe defaults:

      ## Kubernetes Readiness Probes:
      readinessProbe:
        initialDelaySeconds: 10
        periodSeconds: 5
        timeoutSeconds: 10
        failureThreshold: 3
  3. Install or upgrade your CloudBees CD/RO instance to apply these values.

Configure probing values for CloudBees CD/RO repository server

Add custom repository probe values

To configure custom repository.livenessProbe or repository.readinessProbe values:

  1. In your myvalues.yaml, navigate to values.repository.

  2. Add the applicable fields under repository:

    repository.livenessProbe:

    ## Kubernetes Liveness Probes:
    livenessProbe:
      initialDelaySeconds: <INITIAL-DELAY-IN-SECONDS>
      periodSeconds: <PROBE-PERIOD-IN-SECONDS>
      timeoutSeconds: <TIMEOUT-LENGTH-IN-SECONDS>
      failureThreshold: <FAILURE-THRESHOLD-COUNT>

    repository.readinessProbe:

    ## Kubernetes Readiness Probes:
    readinessProbe:
      initialDelaySeconds: <INITIAL-DELAY-IN-SECONDS>
      periodSeconds: <PROBE-PERIOD-IN-SECONDS>
      timeoutSeconds: <TIMEOUT-LENGTH-IN-SECONDS>
      failureThreshold: <FAILURE-THRESHOLD-COUNT>
    • The default values are:

      repository.livenessProbe defaults:

      ## Kubernetes Liveness Probes:
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 3

      repository.readinessProbe defaults:

      ## Kubernetes Readiness Probes:
      readinessProbe:
        initialDelaySeconds: 120
        periodSeconds: 5
        timeoutSeconds: 5
        failureThreshold: 3
  3. Install or upgrade your CloudBees CD/RO instance to apply these values.

How to autoscale server pods

A HorizontalPodAutoscaler (HPA) automatically updates a workload resource to scale the workload to match demand. HPA deploys additional pods in response to an increased load.

For more information, refer to Horizontal Pod Autoscaling.

CloudBees CD/RO includes horizontal pod autoscaling support for the following deployment components:

  • CloudBees CD/RO server

  • Web server

  • Repository server

CloudBees CD/RO server

The CloudBees CD/RO server supports HPA only when clusteredMode is true.

To enable HPA for the CloudBees CD/RO server, add the following parameter values:

server:
  autoscaling:
    enabled: true                          # enable: true to enable HPA for server
    minReplicas: 1                         # Min number of replicas
    maxReplicas: 3                         # Max number of replicas to scale
    targetCPUUtilizationPercentage: 80     # CPU threshold to scale up
    targetMemoryUtilizationPercentage: 80  # Memory threshold to scale up
    templates: []                          # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
server.autoscaling.minReplicas must match server.replicas.

Web server

The web server supports scaling in both cluster and non-cluster modes.

To enable HPA for the web server, add the following parameter values:

web:
  autoscaling:
    enabled: true                          # enable: true to enable HPA for web
    minReplicas: 1                         # Min number of replicas
    maxReplicas: 3                         # Max number of replicas to scale
    targetCPUUtilizationPercentage: 80     # CPU threshold to scale up
    targetMemoryUtilizationPercentage: 80  # Memory threshold to scale up
    templates: []                          # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
web.autoscaling.minReplicas must match web.replicas.

Repository server

The repository server supports scaling in both cluster and non-cluster modes.

To enable HPA for the repository server, add the following parameter values:

repository:
  autoscaling:
    enabled: true                          # enable: true to enable HPA for repository
    minReplicas: 1                         # Min number of replicas
    maxReplicas: 3                         # Max number of replicas to scale
    targetCPUUtilizationPercentage: 80     # CPU threshold to scale up
    targetMemoryUtilizationPercentage: 80  # Memory threshold to scale up
    templates: []                          # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
repository.autoscaling.minReplicas must match repository.replicas.
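
After enabling autoscaling and upgrading the release, you can verify that the HPA objects were created and are tracking their targets; the namespace is a placeholder:

kubectl get hpa --namespace <your-namespace>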

How to use an existing secret for bound agents

If you are using an existing secret in your values file for flowCredentials.existingSecret, you may encounter the following error message when trying to install the bound agent (cbflow-agent):

To use flowCredentials.existingSecret, please set
boundAgent.flowCredentials.existingSecret to the same value
as flowCredentials.existingSecret in your values file.

This occurs because CloudBees CD/RO expects that, if you use an existing secret for flowCredentials.existingSecret, the same secret is used for the bound agent.

To use your existing CloudBees CD/RO flow-server secret for the bound agent (cbflow-agent):

  1. Open your myvalues.yaml, and search for boundAgent:.

  2. In the boundAgent configuration, add the following:

    flowCredentials:
      existingSecret: <your-flowCredentials.existingSecret-value>
  3. Ensure the entries are correctly indented, and save your file.

On your next install or upgrade attempt, this should remove the boundAgent.flowCredentials.existingSecret error message.
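
Before retrying the install or upgrade, you can also confirm the secret actually exists in the target namespace; both names below are placeholders:

kubectl get secret <your-flowCredentials.existingSecret-value> \
  --namespace <your-namespace>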

How to add additional container values for sidecar injectors

For the default CloudBees CD/RO Helm charts, refer to cloudbees-flow chart configuration values for each component. You can find these cloudbees-flow values referenced in the values.yaml under the *.additionalContainers tag for the corresponding component.

Table 6. Sidecar injector additional container descriptions
Key Description/Default

server.additionalContainers

To add additional containers for the server, uncomment the name, image, and command in your values file.

server:
  additionalContainers:
  # additionalContainers:
  # - name: container-name
  #   image: image:version
  #   command:
  #   - "/container-command"

web.additionalContainers

To add additional containers, uncomment the name, image, and command in your values file.

web:
  additionalContainers:
  # additionalContainers:
  # - name: container-name
  #   image: image:version
  #   command:
  #   - "/container-command"

dois.additionalContainers

To add additional containers, uncomment the name, image, and command in your values file.

dois:
  additionalContainers:
  # additionalContainers:
  # - name: container-name
  #   image: image:version
  #   command:
  #   - "/container-command"

repository.additionalContainers

To add additional containers, uncomment the name, image, and command in your values file.

repository:
  additionalContainers:
  # additionalContainers:
  # - name: container-name
  #   image: image:version
  #   command:
  #   - "/container-command"

boundAgent.additionalContainers

To add additional containers, uncomment the name, image, and command in your values file.

boundAgent:
  additionalContainers:
  # additionalContainers:
  # - name: container-name
  #   image: image:version
  #   command:
  #   - "/container-command"

How to add additional volume values for sidecar injectors

For the default CloudBees CD/RO Helm charts, refer to cloudbees-flow chart configuration values for each component. You can find these cloudbees-flow values referenced in the values.yaml under the *.additionalVolume and *.additionalVolumeMounts tags for the corresponding component.

Table 7. Sidecar injector additional volume parameter descriptions and default values
Key Description/Default

server.additionalVolume

server.additionalVolumeMounts

You can use ConfigMaps, Secrets, and PersistentVolumeClaims as additional volumes. The example below uses PersistentVolumeClaims. To add an additional volume and mount for the server, update the settings in the myvalues.yaml file.

server:
  additionalVolumes:
  # - name: volume0
  #   persistentVolumeClaim:
  #     claimName: volume0
  additionalVolumeMounts:
  # - name: volume0
  #   mountPath: /tmp/volume0

web.additionalVolume

web.additionalVolumeMounts

To add an additional volume and mount, update the settings in the myvalues.yaml file.

web:
  additionalVolumes:
  # - name: volume0
  #   persistentVolumeClaim:
  #     claimName: volume0
  additionalVolumeMounts:
  # - name: volume0
  #   mountPath: /tmp/volume0

repository.additionalVolume

repository.additionalVolumeMounts

To add an additional volume and mount, update the settings in the myvalues.yaml file.

repository:
  additionalVolumes:
  # - name: volume0
  #   persistentVolumeClaim:
  #     claimName: volume0
  additionalVolumeMounts:
  # - name: volume0
  #   mountPath: /tmp/volume0

boundAgent.additionalVolume

boundAgent.additionalVolumeMounts

To add an additional volume and mount, update the settings in the myvalues.yaml file.

boundAgent:
  additionalVolumes:
  # - name: volume0
  #   persistentVolumeClaim:
  #     claimName: volume0
  additionalVolumeMounts:
  # - name: volume0
  #   mountPath: /tmp/volume0

How to reduce ZooKeeper startup time in cluster mode

When deploying CloudBees CD/RO with clusteredMode enabled, ZooKeeper pod replicas may fail or restart multiple times before becoming Ready. The root cause of this issue is that pods are scheduled in order, and each must be ready before the next replica is scheduled.

To reduce initialization times, you can set publishNotReadyAddresses: true in the ZooKeeper service-headless.yaml. This allows DNS records to be published even if pods are not ready, which may help complete discovery before other pods are probed as Ready.

In CloudBees CD/RO v2023.06.0 and later, these values are included in the default Helm charts, and you only need to upgrade to one of these versions to pick up the improvements. If you want to continue to use an older CloudBees CD/RO version, follow the steps in Before you start and Configure ZooKeeper to reduce startup time in cluster mode.

Before you start

The following are important steps or prerequisites you should understand or perform before making changes to the CloudBees CD/RO ZooKeeper configuration:

  • There are two procedures that can be used to update the service-headless.yaml in your deployment:

    • You can use kubectl apply to change the current deployment without updating the entire deployment.

      When using kubectl apply, the next instance of helm upgrade you run may revert these changes, depending on the version you are upgrading to. If you choose to only use kubectl apply and patch your current deployment, ensure the service-headless.yaml template is updated in the cloudbees-flow chart version of your next upgrade before running helm upgrade.
    • You can update the service-headless.yaml within your cloudbees-flow Helm charts, and then use helm upgrade to update the entire deployment.

      If you are using helm upgrade, you need the cloudbees-flow package of your current deployment and a copy of any myvalues.yaml files that need to be applied to the deployment. If you have a copy of your CloudBees CD/RO version's cloudbees-flow package, you can find the files at cloudbees-flow/charts/zookeeper/templates/.

      If you do not have a local copy of your CloudBees CD/RO Helm charts:

      1. Update your cloudbees-flow Helm repository by running:

        helm repo update cloudbees
      2. To see a list of all available charts, run:

        helm search repo cloudbees/cloudbees-flow -l
      3. Find the APP VERSION of your CloudBees CD/RO deployment and note its CHART VERSION.

      4. To install a local copy of the Helm charts for your CloudBees CD/RO version, run:

        helm pull cloudbees/cloudbees-flow --version <CHART VERSION>
      5. To see where the package is stored, run helm env. The packages are stored as archives in the path for $HELM_REPOSITORY_CACHE.

Configure ZooKeeper to reduce startup time in cluster mode

The following steps describe how to reconfigure your ZooKeeper integration to help stabilize and reduce startup times. There are two methods you can use, depending on your requirements: patching your deployment with kubectl or updating it with Helm.

Both methods and their prerequisites are described in Before you start.

Patch ZooKeeper using Kubectl

There is prerequisite information for this task listed in Before you start. Ensure you understand this information prior to starting.

Using kubectl apply updates the current deployment only, not the actual ZooKeeper Helm charts. The next time you run helm upgrade, if you do not apply the changes as described in Update ZooKeeper using Helm or upgrade to CloudBees CD/RO 2023.06.0 or later, you will overwrite these changes.

To reconfigure ZooKeeper using kubectl apply and patch your deployment:

  1. To create a local copy of the zookeeper-headless.yaml, run:

    kubectl get svc zookeeper-headless -n <your-namespace> -o yaml > zookeeper-headless.yaml
  2. Open the zookeeper-headless.yaml template in an editor, and update the values.spec section as follows:

    spec:
      clusterIP: None
      ports:
      {{- range $key, $port := .Values.ports }}
      - name: {{ $key }}
        port: {{ $port.containerPort }}
        targetPort: {{ $key }}
        protocol: {{ $port.protocol }}
      {{- end }}
      publishNotReadyAddresses: true
      selector:
        app: {{ template "zookeeper.name" . }}
        release: {{ .Release.Name }}
  3. To apply the ZooKeeper template to your CloudBees CD/RO deployment, run:

    kubectl apply -f zookeeper-headless.yaml -n <your-namespace>

    This should return the message, service/zookeeper-headless configured.

    1. (Optional) You can check the updated version by running:

      kubectl get svc zookeeper-headless -n <your-namespace> -o yaml
  4. To restart the ZooKeeper service and apply the changes, run:

    kubectl rollout restart deployment/<your-cloudbees-flow-deployment>
  5. (Optional) To monitor the status of the upgrade and the initialization of the ZooKeeper pods, run:

    kubectl get pods --namespace <your-namespace> --watch

This should help reduce the startup times associated with ZooKeeper and stabilize the installation. If ZooKeeper errors are returned, review each step to ensure the correct values were used.

Update ZooKeeper using Helm

The prerequisites for this task are listed in Before you start and should be completed prior to starting.

To reconfigure ZooKeeper using helm upgrade and update your deployment:

  1. Navigate to your path for $HELM_REPOSITORY_CACHE, and open the package archive for your CloudBees CD/RO version. For more information on finding the $HELM_REPOSITORY_CACHE, refer to Before you start.

  2. In the CloudBees CD/RO package, navigate to cloudbees-flow/charts/zookeeper/templates/.

  3. In your service-headless.yaml template, update the values.spec section as follows:

    spec:
      clusterIP: None
      ports:
      {{- range $key, $port := .Values.ports }}
      - name: {{ $key }}
        port: {{ $port.containerPort }}
        targetPort: {{ $key }}
        protocol: {{ $port.protocol }}
      {{- end }}
      publishNotReadyAddresses: true
      selector:
        app: {{ template "zookeeper.name" . }}
        release: {{ .Release.Name }}
  4. To upgrade your CloudBees CD/RO with the new templates and values files, run:

    # You can use multiple instances of -f if you have more than one values file.
    # Helm upgrades exit when all tasks are complete, regardless of timeout.
    # However, if your database is very large, the timeout may need to be
    # increased (for example, --timeout 10800s) to prevent returning a FAILED status.
    helm upgrade <your-release-name> cloudbees-flow \
      --namespace <your-namespace> \
      -f <path-to-your-myvalues.yaml> \
      --timeout 4200s
  5. (Optional) To monitor the status of the upgrade and the initialization of the ZooKeeper pods, run:

    kubectl get pods --namespace <your-namespace> --watch

This should help reduce the startup times associated with ZooKeeper and stabilize the installation. If your upgrade fails or ZooKeeper errors are returned, review each step to ensure the correct values were used.

How to configure internal and external agents

A CloudBees CD/RO installation on Kubernetes uses internal services within the cluster to communicate between components, which ensures communication within the cluster is efficient and secure. A common deployment method is to install the CloudBees CD/RO components inside the Kubernetes cluster while installing worker agents, such as deployment targets, outside the cluster on VMs and traditional platforms.

Figure: Kubernetes topology with external agents

In this mixed inside-and-outside deployment, communication with the CloudBees CD/RO components happens through internal services installed on Kubernetes, while communication with CD agents happens outside the cluster. These external agents communicate with the internal components through a pair of gateway agents.

Gateway agent pairs can be installed using one of these methods:

  • Using the CloudBees CD/RO Helm chart

  • Using the CloudBees CD/RO agent Helm chart

Installing gateway agent pairs using the CloudBees CD/RO Helm chart

  1. Enable both internalGatewayAgent and externalGatewayAgent so the CloudBees CD/RO chart installs the agent sub-chart as gateway agents. Make sure you set ingress.host and externalGatewayAgent.service.publicHostName to the same value:

    ingress:
      host: my-flow.example.com
    internalGatewayAgent:
      enabled: true
    externalGatewayAgent:
      enabled: true
      service:
        publicHostName: my-flow.example.com
  2. Ensure your default values file has the nginx-ingress (Kubernetes 1.21 and earlier) or ingress-nginx (Kubernetes 1.22 and later) TCP config map settings uncommented, to expose the gateway agents service on port 7800:

    # Use the nginx-ingress key for Kubernetes 1.21 and earlier,
    # or ingress-nginx for Kubernetes 1.22 and later.
    ingress-nginx:
      tcp:
        8200: "{{ .Release.Namespace }}/flow-repository:8200"
        8443: "{{ .Release.Namespace }}/flow-server:8443"
        7800: "{{ .Release.Namespace }}/gateway-external-agent-flow-agents:7800"
  3. If you are not creating an ingress controller as part of the CloudBees CD/RO chart because you already have an existing one, make sure the TCP config map is added to expose the above ports and services (for more details, refer to the Kubernetes documentation), then install the chart:

    helm repo add cloudbees https://public-charts.artifacts.cloudbees.com/repository/public/
    helm repo update
    helm install releaseName cloudbees/cloudbees-flow \
      -f valuesFile --namespace nameSpace --timeout 10000s

Installing gateway agents using the CloudBees CD/RO agent Helm chart

To create the gateway agent using the agent chart:

  1. Create the server secret:

    # Flow server credentials used to register
    # the agent as a resource on the Flow server.
    kubectl create secret generic flow-agent-secret \
      --from-literal=CBF_SERVER_USER='admin' \
      --from-literal=CBF_SERVER_PASSWORD=<flow-server-password> \
      -n <namespace>
  2. Create a values file for the internal agent called gateway-default-agent.yaml:

    resourceName: gateway-default-agent
    replicas: 1
    flowCredentials:
      existingSecret: flow-agent-secret
    service:
      publicHostName: gateway-default-agent-flow-agents
  3. Install the gateway internal agent using gateway-default-agent.yaml:

    helm install gateway-default-agent cloudbees/cloudbees-flow-agent \
      -f gateway-default-agent.yaml -n <namespace>
  4. Create a values file for the external agent called gateway-external-agent.yaml that adds the zoneName, publicHostName, and gateway configuration:

    resourceName: gateway-external-agent
    zoneName: external
    service:
      # External DNS hostname that the external agents use to communicate
      # with the external gateway agent
      publicHostName: my-flow.example.com
    # Configure the gateway using the external gateway agent
    gateway:
      # Configure the gateway using this agent
      enabled: true
      # Name of the gateway to create
      name: external
      # Name of the gateway agent to pair with as a gateway resource
      pairedResourceName: gateway-default-agent
    flowCredentials:
      existingSecret: flow-agent-secret
  5. Install the gateway external agent using gateway-external-agent.yaml:

    helm install gateway-external-agent cloudbees/cloudbees-flow-agent \
      -f gateway-external-agent.yaml -n <namespace>
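
After both installations complete, you can verify that the gateway agent pods are running before checking the gateway pairing in CloudBees CD/RO. A quick check, assuming standard kubectl and the release names used above:

kubectl get pods -n <namespace> | grep gateway-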

Autoscaling gateway agents

External gateway agents allow connectivity to be routed through gateways internally within your Kubernetes cluster or externally via an external endpoint. Typically, agents are included within the gateway configuration. Because each pod replica is registered individually as a resource, and gateway agents are paired 1:1, each replica is a single point of failure.

This model is shown in the following example Kubernetes cluster using an external gateway agent:

Figure 1. External gateway agents

Although this model generally provides internal and external connectivity, it does not promote high availability. However, horizontally autoscaling your gateway agents allows your cluster to dynamically handle workloads or the loss of a gateway agent. CloudBees CD/RO provides this option through horizontal pod autoscaling (HPA) of gateway agents.

By default, the CloudBees CD/RO server does not include HPA gateway agents. You must manually configure gateway agent autoscaling.

Configuring HPA gateway agents

Using HPA gateway agents allows you to distribute the cluster workload over one or many agent replicas and provides availability if one fails. The following steps allow your HPA gateway agents to be registered as services, using a service name, instead of the resource name of individual pods.

The CloudBees CD/RO cloudbees-flow Helm chart is available in the public cloudbees-examples repository. CloudBees recommends you save this in a local myvalues.yaml to retain it within your environment.
  1. In your CloudBees CD/RO cloudbees-flow Helm chart, add the following:

    internalGatewayAgent:
      enabled: true
      autoscaling:
        enabled: true
        minReplicas: 1
        maxReplicas: 2
        targetCPUUtilizationPercentage: 80
        targetMemoryUtilizationPercentage: 80
    # cloudbees-flow-agent chart configurations used for creating the
    # external agent for the gateway
    externalGatewayAgent:
      enabled: true
      autoscaling:
        enabled: true
        minReplicas: 1
        maxReplicas: 2
        targetCPUUtilizationPercentage: 80
        targetMemoryUtilizationPercentage: 80
  2. For internalGatewayAgent, configure the following for your cluster:

    1. minReplicas

    2. maxReplicas

    3. targetCPUUtilizationPercentage

    4. targetMemoryUtilizationPercentage

      For HPA gateway agents to work, internalGatewayAgent.enabled and internalGatewayAgent.autoscaling.enabled must be set to true.
  3. For externalGatewayAgent, configure the following for your cluster:

    1. minReplicas

    2. maxReplicas

    3. targetCPUUtilizationPercentage

    4. targetMemoryUtilizationPercentage

      For HPA gateway agents to work, externalGatewayAgent.enabled and externalGatewayAgent.autoscaling.enabled must be set to true.
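
After applying these values, you can confirm that the autoscalers were created and are tracking their targets. A quick check, assuming standard kubectl:

kubectl get hpa -n <namespace>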

Autoscaling non-gateway agents

CloudBees CD/RO does not support horizontal autoscaling of non-gateway agents. Attempting to apply these instructions to non-gateway agents results in an invalid configuration error.

How to use Kubernetes Pod Security Standards

As of Kubernetes version 1.25, Pod Security Standards (PSS) are a built-in feature you can use to enforce hardening policies in your cluster. Acting through the built-in Pod Security Admission controller, PSS allows you to apply consistent baselines by adding labels to your Kubernetes namespaces.

PSS replaces the deprecated Pod Security Policies to enforce restrictions on pod behavior and should be used with Kubernetes clusters version 1.25 and above.

Pod Security Standards

There are three standards/levels you can configure for pods using the pod security admission controller:

  • Privileged: An allow-by-default policy that provides the widest possible level of permissions. This policy allows known privilege escalations and is typically reserved for privileged, trusted users.

  • Baseline: A minimally restrictive policy that prevents known privilege escalations. This policy allows the minimally-specified default pod configuration and typically is used in relation to non-critical applications.

  • Restricted: A heavily restricted policy that follows current pod hardening best practices. This policy is typically used in relation to security-critical applications and lower-trust users.

For more information, refer to the Kubernetes Pod Security Standards documentation.

These policies are applied to namespaces using labels via three modes: enforce, warn, and audit. Depending on your needs, you can configure namespace labels with multiple modes, each with its own policy. For example, you can configure namespaces to warn against violations of the restricted policy, but only enforce the baseline policy. This approach allows you to enforce minimal protections while identifying areas you can improve before enforcing restricted standards.

Applying pod security context in CloudBees CD/RO Helm charts

Security contexts define privilege and access-control settings applied to pods and containers at runtime. To use security contexts in CloudBees CD/RO, you must configure securityContext in the CloudBees CD/RO server and agent Helm charts.

By default, securityContext.enabled=false is set in the CloudBees CD/RO server and agent Helm charts, so security contexts must be explicitly enabled.

To enable this setting:

  1. Open your myvalues.yaml for cloudbees-flow or cloudbees-flow-agent.

  2. Change securityContext.enabled=false to securityContext.enabled=true.

  3. Configure additional project-specific settings for securityContext as needed:

    Refer to the Kubernetes Pod Security Standards documentation for more information.
    1. The following configurations are available for CloudBees CD/RO v2023.12.0 and later Helm charts:

      Pods security context configurations v2023.12.0 and later
      ### --------------------------------------------
      ### Pods security context
      ### --------------------------------------------
      ## Requires `securityContext.enabled=true` to apply `securityContext` settings for the pod spec.
      ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
      securityContext:
        enabled: false
        fsGroup: 1000
        runAsUser: 1000
        ## Configure the pod security context, which is applied to the pod spec.
        ## Ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#podsecuritycontext-v1-core
        # fsGroup: 1000
        # fsGroupChangePolicy: OnRootMismatch
        # runAsGroup: 1000
        # runAsNonRoot: true
        # runAsUser: 1000
        # seLinuxOptions: {}
        # seccompProfile: {}
        # supplementalGroups:
        # sysctls:

      ## Requires `securityContext.enabled=true` to apply `containerSecurityContext` settings for containers.
      containerSecurityContext: {}
        ## Configure the container security context, which is applied to containers.
        ## Ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#securitycontext-v1-core
        # allowPrivilegeEscalation: false
        # capabilities:
        #   drop: [ "ALL" ]
        # privileged: false
        # procMount: "Default"
        # readOnlyRootFilesystem: true
        # runAsGroup: 1000
        # runAsNonRoot: true
        # runAsUser: 1000
        # seLinuxOptions: {}
        # seccompProfile: {}
    2. The following child Helm charts also support security context configurations for CloudBees CD/RO v2023.12.0 and later:

      Zookeeper

      To configure a security context for ZooKeeper, navigate to the ZooKeeper section of your myvalues.yaml and add the following fields:

      zookeeper:
        securityContext:
          enabled: true
          fsGroup: 1000
        containerSecurityContext: {}
      MariaDB

      To configure a security context for MariaDB, navigate to the MariaDB section of your myvalues.yaml and add the following fields:

      mariadb:
        securityContext:
          enabled: true
          fsGroup: 1000
        containerSecurityContext: {}
      Bound agents

      To configure a security context for bound agents, navigate to the bound agents section of your myvalues.yaml and add the following fields:

      boundAgent:
        securityContext:
          enabled: true
          fsGroup: 1000
        containerSecurityContext: {}
      CloudBees CD/RO agents

      To configure a security context for CloudBees CD/RO agents, in your agent myvalues.yaml, add the following fields:

      securityContext:
        enabled: true
        fsGroup: 1000
        runAsUser: 1000
      containerSecurityContext: {}

Applying Pod Security Admission to namespaces

To add or modify Kubernetes Pod Security Admission labels, use:

kubectl label --overwrite ns <namespace-to-modify> pod-security.kubernetes.io/<mode-of-coverage>=<policy-level>

Where:

  • <namespace-to-modify>: The namespace on which to add or modify the label.

  • <mode-of-coverage>: The mode you want to apply: enforce, warn, or audit.

  • <policy-level>: The PSS policy you want to apply: privileged, baseline, or restricted.

For example, to create a new namespace called mynamespace that enforces the baseline level and warns at the restricted level:

kubectl create namespace mynamespace
kubectl label --overwrite ns mynamespace pod-security.kubernetes.io/enforce=baseline
kubectl label --overwrite ns mynamespace pod-security.kubernetes.io/warn=restricted
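
Before enforcing a stricter level, you can preview which existing pods would violate it. A non-destructive check, assuming Kubernetes 1.25 or later, is a server-side dry run of the stricter label, which prints warnings without changing the namespace:

kubectl label --dry-run=server --overwrite ns mynamespace pod-security.kubernetes.io/enforce=restricted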

Applying different Pod Security Standards versions

By default, when a PSS is applied to a mode using pod-security.kubernetes.io/, the latest version of PSS is used. However, you can also specify a specific Kubernetes version of PSS to use for your mode with:

kubectl label --overwrite ns mynamespace pod-security.kubernetes.io/<mode-of-coverage>-version=<K8s-version>

Where:

  • <mode-of-coverage>: The mode you want to apply: enforce, warn, or audit.

  • <K8s-version>: The Kubernetes version of PSS to use, such as v1.25.

For example, to create a new namespace called mynamespace that enforces the baseline level using the PSS version from Kubernetes 1.25:

kubectl create namespace mynamespace
kubectl label --overwrite ns mynamespace pod-security.kubernetes.io/enforce=baseline
kubectl label --overwrite ns mynamespace pod-security.kubernetes.io/enforce-version=v1.25

How to configure agent resource name templates

Generally, you set the name for the agent resource with the agent.resourceName value, but this only works when the agent deployment has a single replica. If there are multiple replicas, each tries to register itself using the same resource name; as a result, only one replica from the entire deployment is registered as a resource.

The resource name defaults to hostname, but other template values are available to fully qualify the resource:

  • Ordinary Helm template values. For example: {{ .Release.Name }}, {{ .Release.Namespace }}, and so on.

  • In addition, two special template values are allowed:

    • {{ hostname }}: The actual resource hostname.

    • {{ ordinalIndex }}: The serial replica index in the StatefulSet.

For example:

resourceName: "myResource - {{ .Release.Name }} - {{ ordinalIndex }}"

How to pre-provision volume snapshots as a PVC in StatefulSets

This how-to describes using a pre-provisioned volume snapshot as a PersistentVolumeClaim (PVC) within a PersistentVolume (PV) for a Kubernetes StatefulSet. The instructions for performing these actions may differ depending on your cloud provider; however, the general steps are:

  1. Create a PVC manifest.

  2. Create a PV manifest that references your PVC and snapshot.

  3. Apply these manifests to your cluster.

  4. Test your cluster to ensure the PV and PVC are present with the desired values.

Even though this how-to describes specific steps for CloudBees Analytics (flow-devopsinsight/dois), you can modify the steps here to apply to other CloudBees CD/RO components.
  1. Create a PVC manifest (pvc.yaml):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: PVCNAME
    spec:
      storageClassName: STORAGE_CLASS
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: STORAGE_SIZE

    For your metadata.name used with a CloudBees Analytics PVC, CloudBees recommends:

    • elasticsearch-data-flow-devopsinsight-0

    This is based on the convention <volume-name>-<pod-name-with-index>, where:

    • <volume-name> is elasticsearch-data as the default .Values.storage.volumes.doisStorage.name.

    • <pod-name-with-index> is flow-devopsinsight-0 as the default pod name for the CloudBees Analytics server with its 0 index.

  2. Create a PV manifest (pv.yaml) that references the PVCNAME from your PVC manifest and your snapshot:

    The following example is based on using GCP’s gcePersistentDisk. Use the format required by your provider to create a reference for your snapshot.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: PVNAME
    spec:
      storageClassName: STORAGE_CLASS
      capacity:
        storage: STORAGE_SIZE
      accessModes:
        - ReadWriteOnce
      claimRef:
        namespace: NAMESPACE
        name: PVCNAME
      # Use the directive from your provider to reference your snapshot
      gcePersistentDisk:
        pdName: CLOUD_DISK_NAME
        fsType: ext4
  3. Update your cluster with the manifest files:

    # Apply the PVC (pvc.yaml) and PV (pv.yaml) to your cluster:
    kubectl apply -f pvc.yaml -f pv.yaml
  4. Assign the following variables:

    pvcName="<PVCNAME-from-pvc.yaml>"
    pvName="<PVNAME-from-pv.yaml>"
  5. Check if the PV is available in your cluster and has the desired values:

    kubectl get pv $pvName
  6. Check if the PVC is available in your cluster and has the desired values:

    kubectl get pvc $pvcName
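
As a final sanity check, assuming standard kubectl, confirm that both objects report a Bound status:

kubectl get pv $pvName -o jsonpath='{.status.phase}'    # expect: Bound
kubectl get pvc $pvcName -o jsonpath='{.status.phase}'  # expect: Bound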

How to configure SMTP proxy servers for CloudBees CD/RO on Kubernetes

For CloudBees CD/RO installations on Kubernetes, you can configure an SMTP proxy server by passing its configuration as server.ecconfigure values in the cloudbees-flow Helm chart. These configurations are added to the CloudBees CD/RO wrapper.conf and applied during installation.

In the wrapper.conf, Kubernetes configurations are located in the 10000 range, so your SMTP proxy settings must also be configured in the 10000 range.

This is an example of an SMTP proxy configuration being passed as server.ecconfigure values in the cloudbees-flow Helm chart:

server:
  ecconfigure: "--wrapperJavaAdditional=10001=-Dmail.smtp.proxy.host=<your-host> \
    --wrapperJavaAdditional=10002=-Dmail.smtp.proxy.password=<your-password> \
    --wrapperJavaAdditional=10003=-Dmail.smtp.proxy.user=<your-user> \
    --wrapperJavaAdditional=10004=-Dmail.smtp.proxy.port=<your-port>"
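
After adding these values, apply them to a running installation with a Helm upgrade. A minimal sketch, where the release name, values file, and namespace are placeholders for your own:

helm upgrade <your-release-name> cloudbees/cloudbees-flow \
  -f myvalues.yaml --namespace <your-namespace>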

How to make ingress-nginx default namespace-specific

CloudBees does not recommend installing multiple instances of CloudBees CD/RO in different namespaces within the same Kubernetes cluster, especially in production environments, because this can create multiple naming conflicts among components.

Within your non-production environment, if you must install multiple CloudBees CD/RO instances in different namespaces on the same Kubernetes cluster, the installation fails by default because of ingress-nginx naming conflicts. This is caused by each instance using the default naming convention from the flow-server values file, whereas each ingress-nginx.controller.ingressClassResource.name instance within a Kubernetes cluster requires a unique name.

An option to fix this error is to use the --set directive as part of the helm install command to provide a unique name for ingress-nginx.controller.ingressClassResource.name in each CloudBees CD/RO instance. To do so, you can adapt the following code sample to your project-specific needs when installing or upgrading a CloudBees CD/RO instance in your non-production environment:

RELEASE="<YOUR-CD/RO-RELEASE>"
NAMESPACE="<YOUR-CD/RO-NAMESPACE>"

helm install $RELEASE cloudbees/cloudbees-flow \
  --values=myvalues.yaml \
  --namespace $NAMESPACE \
  --set ingress-nginx.controller.ingressClassResource.name=$RELEASE.$NAMESPACE

How to configure graceful termination period for agent pods

For various reasons, Kubernetes clusters may migrate agent pods across nodes. Normally this is triggered by a SIGTERM signal to the current agent pod, which immediately terminates it. To provide better monitoring and graceful termination options, the CloudBees CD/RO Helm charts support configurable termination grace periods for agent pods, with a default grace period of 30 seconds.

Options to add graceful termination periods were added in CloudBees CD/RO v2024.03.0 and later. This option is not available in CloudBees CD/RO v2023.12.0 and earlier.

To configure graceful termination for agent pods:

  1. Open your CloudBees CD/RO cloudbees-flow-agent v2024.03.0 or later my-values.yaml.

  2. Search for terminationGracePeriodSeconds.

  3. Set the desired graceful termination period in seconds.

    The default value is 30 seconds.
  4. Update your Kubernetes CloudBees CD/RO installation to apply the new configuration.
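
A minimal sketch of the resulting agent values, assuming a v2024.03.0 or later cloudbees-flow-agent chart; the 120-second value is illustrative:

# my-values.yaml (cloudbees-flow-agent)
terminationGracePeriodSeconds: 120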

How to increase VM memory maps

The default configuration of your Linux kernel’s vm.max_map_count may not support the needs of your Docker containers. There are multiple scenarios, such as logging and analytics, where you may need to increase the vm.max_map_count.

To increase the vm.max_map_count, run:

helm upgrade --install node-level-sysctl node-level-sysctl -n kube-system \
  --set "parameters.vm\.max_map_count=<replace_with_your_value>"
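
To confirm the new value took effect, assuming you can exec into a pod scheduled on the node, read the kernel parameter directly:

kubectl exec -it <any-pod-on-the-node> -n <namespace> -- cat /proc/sys/vm/max_map_count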

How to deploy monitoring support

Use Grafana to visualize the Elasticsearch metrics. Issue the following commands to deploy it at your site:

helm install prometheus stable/prometheus
helm install grafana stable/grafana

# The username is admin; the password is the result of the following command:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | \
  base64 --decode ; echo
# Example output: RandomPassword
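
To reach the Grafana UI locally, assuming the release is named grafana and its service listens on port 80, you can port-forward and sign in as admin:

kubectl port-forward --namespace default svc/grafana 3000:80
# Then browse to http://localhost:3000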

How to deploy logging support

In general, each object in a Kubernetes cluster produces its own logs, and users typically have their own mechanism in place to manage logs from the different services in the cluster. Logs from CloudBees CD/RO services and pods can be captured with standard log shipper tools. A sample configuration file for the FileBeat log shipper is provided here.
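
As an illustration only (not the sample file referenced above), a minimal filebeat.yml using Filebeat's Kubernetes autodiscover might look like the following; the label selector and Elasticsearch host are assumptions to adapt:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Ship logs only from containers whose pod label `app` contains "flow"
        - condition:
            contains:
              kubernetes.labels.app: "flow"
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
output.elasticsearch:
  hosts: ["<your-elasticsearch-host>:9200"]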

How to configure agents to share a workspace

Once the first agent is deployed with the ReadWriteMany access mode, subsequent agents deployed for the same workspace with storage.volumes.agentWorkspace.existingClaim set to true share the first agent’s workspace. The following example shows how to set up flow-agent-1 and flow-agent-2 to share the same workspace, MyWorkspace.

  • Deploy the first agent with storage.volumes.agentWorkspace.accessMode set to ReadWriteMany. This creates the persistent volume claim, setting up the scenario where agents can share the workspace.

    helm install flow-agent-1 cloudbees-flow-agent -f <valuesFile> \
      --set storage.volumes.agentWorkspace.accessMode=ReadWriteMany \
      --set storage.volumes.agentWorkspace.name=MyWorkspace \
      --namespace <nameSpace> --timeout 10000s
  • Deploy subsequent agents to the same workspace with storage.volumes.agentWorkspace.existingClaim set to true.

    helm install flow-agent-2 cloudbees-flow-agent -f <valuesFile> \
      --set storage.volumes.agentWorkspace.existingClaim=true \
      --set storage.volumes.agentWorkspace.name=MyWorkspace \
      --namespace <nameSpace> --timeout 10000s
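
To confirm that both agents share storage, assuming standard kubectl, list the persistent volume claims in the namespace; a single claim should back the shared workspace:

kubectl get pvc --namespace <nameSpace> | grep MyWorkspace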

The following table summarizes parameters used to configure a shared agent workspace. You can also refer to Persistent storage for more information.

Parameter Description

storage.volumes.agentWorkspace.accessMode

Define the workspace access mode. Possible values include ReadWriteMany and ReadWriteOnce.

For shared workspaces, use ReadWriteMany.

storage.volumes.agentWorkspace.name

The agent workspace name. If not specified, flow-agent-workspace is used.

Specify the same name across all agents that share the workspace.

storage.volumes.agentWorkspace.storage

The amount of storage to allocate.

For shared workspaces, allocate approximately 5 GiB per agent. Increase based on the agent’s requirements.

storage.volumes.agentWorkspace.existingClaim

Whether to use the existing claim for a previously deployed agent to share its workspace.

Set to true to share the existing claim for storage.volumes.agentWorkspace.name.

How to increase memory limits for CloudBees CD/RO components

During periods of high workload, a server component can run out of memory if it requests more memory than is allocated to the JVM. To increase the memory for a component, first allocate more memory to the component’s container; then, depending on the component, increase the memory allocation for the component running in the container accordingly. Refer to Cluster capacity for default container memory settings.

The following configurations can be used to change the memory allocation for each container and component.

Component: CloudBees CD/RO server

  Container memory limit: server.resources.limits.memory

  Component memory setting: server.ecconfigure

  Example: server.ecconfigure: "--serverInitMemoryMB=4096 --serverMaxMemoryMB=4096"

Component: CloudBees CD/RO web server

  Container memory limit: web.resources.limits.memory

  Component memory setting: N/A

Component: Repository server

  Container memory limit: repository.resources.limits.memory

  Component memory setting: repository.ecconfigure

  Example: ecconfigure: "--repositoryInitMemoryMB=256 --repositoryMaxMemoryMB=512"

Component: CloudBees Analytics server

  Container memory limit: dois.resources.limits.memory

  Component memory setting: dois.esRam (heap size in MB for Elasticsearch)

Component: Bound agent

  Container memory limit: boundAgent.resources.limits.memory

  Component memory setting: boundAgent.ecconfigure

  Example: ecconfigure: "--repositoryInitMemoryMB=256 --repositoryMaxMemoryMB=512"

Bound agents are internal CloudBees CD/RO components

The CloudBees CD/RO bound agent (flow-bound-agent) is an internal component used specifically by CloudBees CD/RO for internal operations. While it is possible to schedule user jobs on bound agents, they are not intended for this purpose, and CloudBees CD/RO agents should be used instead.

If operations other than CloudBees CD/RO internal operations run on bound agents, CloudBees CD/RO performance may become unpredictable. Additionally, system requirements for CloudBees CD/RO instances assume that bound agents are used exclusively by CloudBees CD/RO, and these requirements are not reliable for instances where user jobs also run on them.

Inject new memory limits using Helm. Update your local values file (here it is called myvalues.yaml) with the new values and issue the helm upgrade command.

helm upgrade <releaseName> <chartName> \
  -f <valuesFile> --namespace <nameSpace> --timeout 10000s
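
For example, a minimal values sketch that raises the CloudBees CD/RO server container limit together with its JVM settings; the sizes are illustrative, not recommendations:

server:
  resources:
    limits:
      memory: 16Gi
  ecconfigure: "--serverInitMemoryMB=8192 --serverMaxMemoryMB=12288"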

How to resolve an ingress class name conflict

If you have an ingress class name conflict during your upgrade:

  1. Update the default ingress class name for the ingress-nginx controller with the additional values below in your myvalues.yaml.

    ingress:
      enabled: true
      host: <your-host.example.com>
      class: <ingress-class-name>
    ingress-nginx:
      enabled: true
      controller:
        ingressClassResource:
          name: <ingress-class-name>
    nginx-ingress:
      enabled: false
  2. After you update the class name, patch the existing ingress by running:

kubectl patch ingress/flow-ingress -p '{"spec": {"ingressClassName":"<ingress-class-name>" }}' -n <namespace>

If this does not resolve the naming conflict, ensure the ingress-nginx.controller.ingressClassResource.name value matches ingress.class and both are correct. If you update the myvalues.yaml to correct the class names, run the previous command again to resolve the conflict.

How to resolve DNS/SSL issues after resolving ingress class name conflicts

If you have a DNS/SSL issue after resolving the ingress class name conflict, restart your ingress-nginx pod to mount the local certificates:

  1. Get the deployment of the ingress-nginx:

    kubectl get deployments -n <namespace> | grep ingress-nginx

    This returns the ingress-nginx deployment, such as:

    <your-deployment>-ingress-nginx-controller 1/1 1 1 56m
  2. Restart the ingress-nginx deployment:

    kubectl rollout restart deployment <your-deployment>-ingress-nginx-controller -n <namespace>

Your upgrade is complete.

How to resolve volume node affinity conflicts

Sometimes pods can hang in the pending stage with the following error:

x/y nodes are available: y node(s) had volume node affinity conflict.

This can happen when the availability zone for the persistent volume claim is different from the availability zone of the node on which the pod gets scheduled.

A cluster administrator can address this issue by specifying the WaitForFirstConsumer mode, which delays the binding and provisioning of a PersistentVolume until a pod using the PersistentVolumeClaim is created. PersistentVolumes are then selected or provisioned according to the topology specified by the pod’s scheduling constraints.

For more information, refer to the Kubernetes volume binding mode documentation: https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
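
A minimal StorageClass sketch with delayed binding; the name is a placeholder, and the provisioner shown is GCP's, so substitute your provider's:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <your-storage-class>
provisioner: kubernetes.io/gce-pd  # substitute your provider's provisioner
volumeBindingMode: WaitForFirstConsumer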

How to configure a MySQL database

Use these instructions to configure CloudBees CD/RO with a MySQL database.

  1. Configure the following database parameters in your Helm chart values file:

    database.clusterEndpoint

    Use this option if your database resides in the same Kubernetes cluster as CloudBees CD/RO. The notation is db-service.namespace. If deploying into the same namespace, the .namespace component can be omitted.

    Default: null

    database.externalEndpoint

    The database endpoint residing outside of the CloudBees CD/RO Kubernetes cluster; this can be the DNS for a cloud hosted database service. The database or schema principal must have full read/write access on that schema.

    Default: null

    database.dbName

    The database instance name.

    Default: null

    database.dbPassword

    The database password.

    Default: null

    database.dbPort

    The port on which the database server is listening.

    Default: null

    database.dbType

    The database type with which CloudBees CD/RO persistence works. For MySQL, use mysql.

    Default: null

    database.dbUser

    Username that the CloudBees CD/RO server uses to access the database on the database server.

    Default: null

    database.existingSecret

    Use this option if you have or are planning to deploy the credential’s secret yourself. The layout must be the same as that of server-secrets.yaml::dbSecret.

    Default: null

  2. Make the MySQL connector jar available during a Helm install or upgrade:

    --set database.mysqlConnector.externalUrl="<connector-url>" \
    --set database.mysqlConnector.enabled=true

    Alternatively, place the following configuration into your Helm chart values file:

    mysqlConnector:
      enabled: true/false
      externalUrl: <connector-url>
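
Putting the parameters together, here is a minimal values sketch for an external MySQL database; all endpoint and credential values are placeholders:

database:
  dbType: mysql
  externalEndpoint: <your-db-host.example.com>
  dbPort: 3306
  dbName: <your-db-name>
  dbUser: <your-db-user>
  dbPassword: <your-db-password>
  mysqlConnector:
    enabled: true
    externalUrl: <connector-url>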

How are data backups and disaster recovery addressed

Kubernetes backup tools, such as Velero (previously known as Heptio Ark), are used to back up and restore Kubernetes clusters. CloudBees CD/RO services and pods deployed in the cluster can be backed up using those standard backup tools. Application data maintained in persistent volumes must be backed up at the same time; tools such as Velero support persistent volume backups.

Here is the list of volumes associated with the CloudBees CD/RO pods:

Container Volumes

flow-server

efs-deployment, logback-config, init-scripts

flow-agent

flow-agent-workspace, logback-config

flow-devopsinsight

elasticsearch-data

flow-repository

efs-repository, logback-config

flow-web

efs-deployment
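
These volumes can be included in a namespace-scoped backup. As an illustration, assuming Velero is installed in the cluster, the backup name and namespace below are placeholders:

velero backup create cdro-backup \
  --include-namespaces <your-namespace> \
  --snapshot-volumes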