Centralize log storage for servers

In Kubernetes deployments of CloudBees CD/RO, running multiple server pods is standard practice for scalability and high availability. However, monitoring logs across these pods can be cumbersome, because by default each pod writes its logs to its own pod-local location.

To centralize logging without modifying application code, you can configure the pods to write to a shared PersistentVolumeClaim (PVC), with each pod writing to its own folder inside the volume. This approach enables consistent, isolated log storage across all pods and supports integrations with external logging tools—such as Fluentd and Fluent Bit—and storage backends like Amazon S3 and other S3-compatible systems.

This guide covers how to:

  • Create a shared PVC

  • Configure Helm values for PVC mounting

  • Retrieve server logs

Create a shared PVC

The first step is to define a PersistentVolumeClaim (PVC) that uses a shared file system, such as the nfs-client storage class, which supports the ReadWriteMany (RWX) access mode.

If your cluster already has a suitable PVC, you can reuse it. However, to keep logs organized and maintainable, CloudBees recommends creating a dedicated PVC specifically for logging.

To use an existing PVC, skip the following steps, and proceed to Configure Helm values for PVC mounting.

  1. Create a PVC definition file, such as the following:

    For the following examples/commands, this file is named flow-logs-pvc.yaml. If you use a different name, adjust the examples/commands as required.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: flow-logs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: nfs-client
      resources:
        requests:
          storage: 10Gi

    Where:

    • metadata.name: The name of the PVC.

      The following examples and commands use flow-logs-pvc. If you choose a different name, update references accordingly.
    • spec.accessModes: ReadWriteMany: Allows simultaneous write access by multiple pods.

    • spec.storageClassName: nfs-client: Enables dynamic provisioning of NFS-backed shared storage.

    • spec.resources.requests.storage: The requested size of the PVC.

      In this example, five servers each write ~50MB of logs per day, for a total of ~250MB/day. With a 30-day log retention policy, this amounts to 250MB × 30 days = 7.5GB. Adding a ~2.5GB buffer brings the total request to 10Gi.

      If you have more servers, higher log volume, or a longer retention policy, adjust the value accordingly.
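
      If you prefer to script this estimate, the arithmetic is simple to reproduce in a shell. The figures below are this example's assumptions (five servers, ~50MB/day each, 30-day retention, ~2.5GB buffer), not defaults:

      # 5 servers x 50MB/day x 30-day retention, plus a ~2.5GB (2500MB) buffer
      echo "$((5 * 50 * 30 + 2500))MB"   # prints 10000MB, so request 10Gi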

  2. Create the PVC using one of the following methods:

    • kubectl (Kubernetes CLI):

      kubectl example
      kubectl apply -f flow-logs-pvc.yaml
      kubectl get pvc flow-logs-pvc

      You should see the PVC status as Bound once it is successfully created; sample output appears after this list.

    • oc (OpenShift CLI)

      oc example
      oc apply -f flow-logs-pvc.yaml
      oc get pvc flow-logs-pvc

      You should see the PVC in the Bound state once it is ready.

    • OpenShift Web Console

      OpenShift Web Console example
      1. Navigate to Storage → PersistentVolumeClaims.

      2. Select Create PersistentVolumeClaim.

      3. Fill in the following details:

        • Name: flow-logs-pvc

        • Access mode: ReadWriteMany

        • Requested size: 10Gi (or adjust as needed)

        • Storage Class: nfs-client (or another dynamic shared storage class)

      4. Select Create.
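
For reference, a successfully created PVC reports a Bound status. With kubectl, the output resembles the following; the exact column layout varies by kubectl version, and the VOLUME name is generated by your provisioner:

NAME            STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
flow-logs-pvc   Bound    pvc-<generated-id>   10Gi       RWX            nfs-client     30s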

Now that the PVC has been created, you can mount it in your CloudBees CD/RO server Helm chart by proceeding to Configure Helm values for PVC mounting.

Configure Helm values for PVC mounting

The next step in configuring a shared logging PVC for your CloudBees CD/RO servers is to update your CloudBees CD/RO Helm chart values.yaml to mount the PVC. The following instructions create a unique directory per pod within the PVC where the log files are stored.

You must complete the following instructions for each CloudBees CD/RO server in your environment. If you skip these steps for any server, it will not be able to write to the shared PVC.
  1. Open your CloudBees CD/RO flow-server values file, and navigate to the server settings (### Flow server configuration section).

  2. Add or update the following to the server chart:

    server:
      additionalVolumes:
        - name: flow-logs
          persistentVolumeClaim:
            claimName: flow-logs-pvc
      additionalVolumeMounts:
        - name: flow-logs
          mountPath: /opt/cbflow/logs
          subPathExpr: $(POD_NAME)
      extraEnvs:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumesPermissionsInitContainer:
        enabled: false

    Where:

    • additionalVolumes.persistentVolumeClaim.claimName: The name of the PVC you created in Create a shared PVC, or of an existing suitable PVC.

      If using an existing PVC, it must have ReadWriteMany access.
    • subPathExpr: $(POD_NAME): Mounts a pod-specific subdirectory from the shared PVC.

    • POD_NAME: An environment variable injected with the pod name.

    • volumesPermissionsInitContainer.enabled: Must be set to false, because POD_NAME is not available in init containers.

  3. Run your Helm upgrade command to upgrade your CloudBees CD/RO deployment with the new values.
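
    For example, a minimal upgrade might look like the following. The release name cd-server, the values file flow-server-values.yaml, and the namespace are placeholders for your own values, and the chart reference assumes the CloudBees CD/RO chart repository is already added as cloudbees:

    helm upgrade cd-server cloudbees/cloudbees-flow \
      -f flow-server-values.yaml \
      -n <namespace>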

After your upgrade is complete, you can view the server logs by following the instructions in Retrieve server logs.

Retrieve server logs

After deployment:

  • Each CloudBees CD/RO server pod writes logs to a dedicated directory inside the shared PVC. The directory name matches the pod name.

  • When using subPathExpr: $(POD_NAME), each pod mounts only its own subdirectory. From inside the pod, log files are written directly to /opt/cbflow/logs.

To access logs for a specific server:

Command:

  kubectl exec -it <pod-name> -- /bin/sh
  cd /opt/cbflow/logs
  ls

Response:

  commander-flow-server-<pod-name>.log
  commander-service.log
  events.log
  setupScripts.log
Since the volume is mounted using subPathExpr: $(POD_NAME), the pod will only see its own logs. Other server directories are not visible from within the container.

To centrally access logs from all servers, mount the shared PVC without subPathExpr, such as in a diagnostics or log-collector pod.
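
The following is a minimal sketch of such a log-collector pod; the pod name, the busybox image, and the /flow-logs mount path are illustrative choices, and flow-logs-pvc is the PVC created earlier:

apiVersion: v1
kind: Pod
metadata:
  name: log-collector
spec:
  containers:
    - name: log-collector
      image: busybox:1.36
      # Keep the container alive so you can exec into it
      command: ["sleep", "3600"]
      volumeMounts:
        # Mount the whole shared PVC; no subPathExpr, so all
        # per-pod directories are visible
        - name: flow-logs
          mountPath: /flow-logs
  volumes:
    - name: flow-logs
      persistentVolumeClaim:
        claimName: flow-logs-pvc

From a shell inside such a pod (for example, kubectl exec -it log-collector -- sh), every server's directory is visible under the mount path: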

bash-5.1$ ls
flow-server-7795454788-5gs4w  flow-server-7795454788-snvbl  flow-server-78448874df-sdwfs
flow-server-7795454788-6m4p4  flow-server-78448874df-ch6gk
bash-5.1$ cd flow-server-7795454788-5gs4w/
bash-5.1$ ls
commander-flow-server-7795454788-5gs4w.log  commander-service.log  events.log  setupScripts.log