Frequently asked questions

How external clients access CloudBees CD/RO components inside an OpenShift cluster

The following endpoints must be accessible by external clients.

  • flow-repository:8200

  • flow-server:8443

Clients running outside a cluster access these endpoints through gateway agents. However, on platforms such as OpenShift, where Ingress is not supported or cannot expose non-web TCP ports, you configure the required endpoints on an external load balancer service using the following parameters.

--set server.externalService.enabled=true --set repository.externalService.enabled=true

By default, these parameters are false.
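If you prefer to set these values in your Helm values file rather than passing --set flags, the equivalent block is sketched below (this assumes the chart's standard server and repository sections):

server:
  externalService:
    enabled: true
repository:
  externalService:
    enabled: true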

To get the DNS name of the externally exposed load balancer services, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>

SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"

REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"
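On platforms where the load balancer is published as an IP address rather than a hostname (this varies by provider), the same lookup can read the ip field instead; a minimal variant, assuming the same service names:

SERVER_LB_IP=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "Available at: https://$SERVER_LB_IP:8443/"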

Enabling CloudBees CD/RO to work with a pod security policy (PSP)

In summary, to enable a pod security policy in CloudBees CD/RO:

  • The vm.max_map_count Linux kernel parameter must be set to 262144 in every Docker container.

  • PSP must be enabled.

Follow these steps to enable PSP in your CloudBees CD/RO Kubernetes cluster.

  1. Clone node-level-sysctl. This is used to set non-namespaced Linux kernel parameters.

  2. Issue this command (a verification sketch follows these steps):

    helm upgrade --install node-level-sysctl node-level-sysctl -n kube-system \
      --set podSecurityPolicy.enabled=true \
      --set "parameters.vm\.max_map_count=262144"
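As a quick sanity check (not part of the official procedure), you can read the kernel parameter from inside a pod running on the affected node; the namespace and pod name below are placeholders:

kubectl exec -n <namespace> <pod-on-the-node> -- cat /proc/sys/vm/max_map_count
# Expected output: 262144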

How to deploy monitoring support

Use Grafana to visualize the Elasticsearch metrics. Issue the following commands to deploy Prometheus and Grafana at your site:

helm install --name prometheus stable/prometheus
helm install --name grafana stable/grafana

# The username is admin; the password is the output of the following command:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | \
  base64 --decode ; echo
# Sample output:
RandomPassword
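To reach the Grafana UI after installation, one common approach is to port-forward the Grafana service; this sketch assumes the chart's default service name grafana and service port 80 in the default namespace:

kubectl port-forward --namespace default svc/grafana 3000:80
# Then browse to http://localhost:3000 and log in as admin with the password retrieved above.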

How to deploy logging support

In general, each object in a Kubernetes cluster produces its own logs, and users typically have their own mechanism in place to manage logs from the different services in the cluster. Logs from CloudBees CD/RO services and pods can be captured with standard log shipper tools. A sample configuration file for the FileBeat log shipper is provided here.
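As a rough illustration only (the linked sample file is the reference), a Filebeat configuration that autodiscovers container logs in a Kubernetes cluster generally looks something like the following; the Elasticsearch host is a placeholder:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - config:
            - type: container
              paths:
                # Collect logs for each discovered container
                - /var/log/containers/*-${data.kubernetes.container.id}.log
output.elasticsearch:
  hosts: ["<elasticsearch-host>:9200"]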

How to configure agents to share a workspace

Once the first agent is deployed with the ReadWriteMany access mode, subsequent agents deployed for the same workspace with storage.volumes.agentWorkspace.existingClaim set to true share the first agent’s workspace. The following example shows how to set up flow-agent-1 and flow-agent-2 to share the same workspace, MyWorkspace.

  • Deploy the first agent with storage.volumes.agentWorkspace.accessMode set to ReadWriteMany. This creates the persistent volume claim, setting up the scenario where agents can use the flow-agent-workspace shared workspace.

    helm install flow-agent-1 cloudbees-flow-agent -f <valuesFile> \
      --set storage.volumes.agentWorkspace.accessMode=ReadWriteMany \
      --set storage.volumes.agentWorkspace.name=MyWorkspace \
      --namespace <nameSpace> --timeout 10000
  • Deploy subsequent agents to the same workspace with storage.volumes.agentWorkspace.existingClaim set to true.

    helm install flow-agent-2 cloudbees-flow-agent -f <valuesFile> \
      --set storage.volumes.agentWorkspace.existingClaim=true \
      --set storage.volumes.agentWorkspace.name=MyWorkspace \
      --namespace <nameSpace> --timeout 10000

The following parameters are used to configure a shared agent workspace; a combined values-file sketch follows the list. You can also refer to Persistent storage for more information.

  • storage.volumes.agentWorkspace.accessMode: Defines the workspace access mode. Possible values are ReadWriteMany and ReadWriteOnce. For shared workspaces, use ReadWriteMany.

  • storage.volumes.agentWorkspace.name: The agent workspace name. If not specified, flow-agent-workspace is used. Specify the same name across all agents that share the workspace.

  • storage.volumes.agentWorkspace.storage: The amount of storage to allocate. For shared workspaces, allocate approximately 5 GiB per agent and increase based on the agents’ requirements.

  • storage.volumes.agentWorkspace.existingClaim: Whether to use the existing claim of a previously deployed agent to share its workspace. Set to true to share the existing claim for storage.volumes.agentWorkspace.name.
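For reference, these settings map to a values-file block like the sketch below; the storage size shown is an illustrative assumption:

storage:
  volumes:
    agentWorkspace:
      accessMode: ReadWriteMany
      name: MyWorkspace
      storage: 10Gi          # illustrative; allow roughly 5 GiB per agent
      existingClaim: false   # set to true on agents that join an existing workspace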

How to increase memory limits for CloudBees CD/RO components

During periods of high workload, a server component can run out of memory if it requests more memory than is allocated to the JVM. To increase the memory for a component, first allocate more memory to the component’s container; then, depending on the component, increase the memory allocation for the component running in the container accordingly. Refer to Cluster capacity for default container memory settings.

The following configurations can be used to change the memory allocation for each container and component.

  • CloudBees CD/RO server
    Container memory limit: server.resources.limits.memory
    Component memory setting: server.ecconfigure
    Example: server.ecconfigure: "--serverInitMemoryMB=4096 --serverMaxMemoryMB=4096"

  • CloudBees CD/RO web server
    Container memory limit: web.resources.limits.memory
    Component memory setting: N/A

  • Repository server
    Container memory limit: repository.resources.limits.memory
    Component memory setting: repository.ecconfigure
    Example: repository.ecconfigure: "--repositoryInitMemoryMB=256 --repositoryMaxMemoryMB=512"

  • CloudBees Analytics server
    Container memory limit: dois.resources.limits.memory
    Component memory settings: dois.esRam (heap size in MB for Elasticsearch); dois.lsInitRam and dois.lsMaxRam (heap size in MB for Logstash)

  • Bound agent
    Container memory limit: boundAgent.resources.limits.memory
    Component memory setting: boundAgent.ecconfigure
    Example: boundAgent.ecconfigure: "--repositoryInitMemoryMB=256 --repositoryMaxMemoryMB=512"

Inject the new memory limits using Helm: update your local values file (here it is called myvalues.yaml) with the new values and issue the `helm upgrade` command.

helm upgrade <releaseName> <chartName> \
  -f <valuesFile> --namespace <nameSpace> --timeout 10000
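As a sketch, the memory-related portion of myvalues.yaml for the CloudBees CD/RO server might look like the following; the container limit shown is an illustrative assumption, not a recommendation:

server:
  resources:
    limits:
      memory: 6Gi   # illustrative; keep the container limit above the JVM settings below
  ecconfigure: "--serverInitMemoryMB=4096 --serverMaxMemoryMB=4096"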

How to resolve volume node affinity conflicts

Sometimes pods can hang in the `Pending` state with the following error:

x/y nodes are available: y node(s) had volume node affinity conflict.

This can happen when the availability zone for the persistent volume claim is different from the availability zone of the node on which the pod gets scheduled.

A cluster administrator can address this issue by specifying the WaitForFirstConsumer mode, which delays the binding and provisioning of a PersistentVolume until a pod using the PersistentVolumeClaim is created. PersistentVolumes are selected or provisioned conforming to the topology that is specified by the pod’s scheduling constraints.

For more information see this article: https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
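For illustration, a StorageClass that uses this binding mode might look like the sketch below; the provisioner shown is a placeholder and must match your platform:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-binding
provisioner: kubernetes.io/aws-ebs   # placeholder; use your cluster's provisioner
volumeBindingMode: WaitForFirstConsumer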

How to configure a MySQL database

Follow these instructions to use CloudBees CD/RO with a MySQL database.

  1. Configure the following database parameters in your Helm chart values file (a combined example appears after these steps):

    database.clusterEndpoint

    Use this option if your database resides in the same Kubernetes cluster as CloudBees CD/RO. The notation is db-service.namespace; if deploying into the same namespace, the .namespace component can be omitted.

    Default: null

    database.externalEndpoint

    The database endpoint residing outside of the CloudBees CD/RO Kubernetes cluster; this can be the DNS for a cloud hosted database service. The database or schema principal must have full read/write access on that schema.

    Default: null

    database.dbName

    The database instance name.

    Default: null

    database.dbPassword

    The database password.

    Default: null

    database.dbPort

    The port on which the database server is listening.

    Default: null

    database.dbType

    The database type with which CloudBees CD/RO persistence works. For MySQL, use mysql.

    Default: null

    database.dbUser

    Username that the CloudBees CD/RO server uses to access the database on the database server.

    Default: null

    database.existingSecret

    Use this option if you have deployed, or plan to deploy, the credentials secret yourself. The layout must be the same as that of server-secrets.yaml::dbSecret.

    Default: null

  2. Make the MySQL connector jar available during a Helm install or upgrade:

    --set database.mysqlConnector.externalUrl="<connector-url>" --set database.mysqlConnector.enabled=true

    Alternatively, place the following configuration into your Helm chart values file:

    mysqlConnector:
      enabled: true/false
      externalUrl: <connector url>
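Putting the parameters from step 1 and the connector settings together, a values-file excerpt might look like the following sketch; the endpoint, credentials, and connector URL are placeholders:

database:
  dbType: mysql
  dbName: <database-name>
  dbUser: <database-user>
  dbPassword: <database-password>
  dbPort: "3306"                    # default MySQL port; adjust if needed
  externalEndpoint: <mysql-host>    # or clusterEndpoint: db-service.namespace
  mysqlConnector:
    enabled: true
    externalUrl: <connector-url>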

Data backups and disaster recovery

Kubernetes backup tools such as Heptio’s Velero (previously known as Ark) are used to back up and restore Kubernetes clusters. CloudBees CD/RO services and pods deployed in the cluster can be backed up using those standard backup tools. Application data maintained in persistent volumes must be backed up at the same time; tools such as Velero support persistent volume backups.
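As an illustrative sketch only (consult the Velero documentation for your environment), a namespace-scoped backup that includes volume snapshots could look like this; the backup name and namespace are placeholders:

velero backup create cdro-backup \
  --include-namespaces <namespace> \
  --snapshot-volumes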

Here is the list of volumes associated with the CloudBees CD/RO pods:

  • flow-server: efs-deployment, logback-config, init-scripts

  • flow-agent: flow-agent-workspace, logback-config

  • flow-devopsinsight: elasticsearch-data

  • flow-repository: efs-repository, logback-config

  • flow-web: efs-deployment