Troubleshooting CloudBees CI on Google Kubernetes Engine (GKE)


If your laptop is already set up to interact with the GKE Kubernetes cluster, you may go directly to the general troubleshooting section. Otherwise, follow the next few sections to set up your laptop.

Enable kubectl to interact with GKE-deployed cluster

If the Kubernetes cluster was created using the GKE UI or the gcloud CLI, you will need to run the following commands on your laptop so that you can interact with that cluster using kubectl.

Authenticate Google account with gcloud

Work with the GKE admin to grant your Google account access to the following Google Cloud roles:

  • Kubernetes Engine Admin

  • Kubernetes Engine Cluster Admin

Use the following command to authenticate your laptop. By default, the command will launch a browser to facilitate the Google login process. You may also use the --no-launch-browser option to accomplish the same if you are using a headless system.

> gcloud auth application-default login [--no-launch-browser]

Retrieve the cluster name

Work with your GKE admin to get the name of the Kubernetes cluster, or run the following command to list all cluster names and choose the correct one to use.

> gcloud container clusters list

Here is an example of the output:

NAME           LOCATION       MASTER_VERSION   MASTER_IP         MACHINE_TYPE     NODE_VERSION   NUM_NODES   STATUS
test20180410c  us-central1-a  1.8.8-gke.0      35.194.43.68      custom-2-5120    1.8.8-gke.0    2           RUNNING
team3-cluster  us-east1-b     1.9.4-gke.1      35.196.148.64     n1-standard-2    1.9.4-gke.1    3           RUNNING
team2-cluster  us-east4-b     1.9.6-gke.0      35.199.21.233     custom-4-16384   1.9.6-gke.0    2           RUNNING
team1-cluster  us-west1-a     1.9.6-gke.0      104.199.113.196   custom-2-8192    1.9.6-gke.0    1           RUNNING

Make a note of the LOCATION of the cluster. For day-to-day interaction, it is most convenient to set a default zone so that you do not have to specify it for every gcloud command against the cluster. Here is an example of setting the zone for team1-cluster, which is in "us-west1-a":

> gcloud config set compute/zone us-west1-a

Set laptop environment with cluster info

In this section, the cluster "team1-cluster" will be used in the examples.

To attach the gcloud environment to the cluster:

> gcloud config set container/cluster team1-cluster

To pass the cluster’s credentials to kubectl:

> gcloud container clusters get-credentials team1-cluster

Ensure that kubectl can use Application Default Credentials to authenticate to the cluster:

> gcloud auth application-default login [--no-launch-browser]

Provide admin capability to your Google account:

> kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
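To confirm that the binding took effect, kubectl auth can-i offers a quick check; this sketch should print "yes" for a cluster-admin account:

> kubectl auth can-i '*' '*'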

At this point, you should have a working kubectl environment. You can confirm that with the following command:

> kubectl config get-contexts

If this is the first time that you have done these instructions, there should be one active kubectl context. For example:

CURRENT   NAME                                     CLUSTER                                  AUTHINFO                                 NAMESPACE
*         gke_account-name-west1-a_team1-cluster   gke_account-name-west1-a_team1-cluster   gke_account-name-west1-a_team1-cluster

Note that there is not a default namespace in the above example. Your CloudBees CI installation will likely be installed in a namespace, e.g., "cje". It is most convenient to set the default namespace so that you do not have to specify it every time that you perform a kubectl command against the CloudBees CI cluster.

> kubectl config set-context $(kubectl config current-context) --namespace="cje"
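To verify that the default namespace was applied, the namespace of the current context can be printed; a minimal check using kubectl's standard JSONPath output:

> kubectl config view --minify --output 'jsonpath={..namespace}'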

kubectl is now ready to interact with the Kubernetes cluster.

There are a number of resources and approaches that you can use to troubleshoot a CloudBees CI failure. This section covers each of them.

Consult the Knowledge Base

The Knowledge Base can be very helpful in troubleshooting problems with CloudBees CI and can be accessed on the CloudBees Support site.

Instance provisioning

Operations center provisioning

  1. Check the pod status (see the command sketch after this list).

  2. Verify that all associated objects have been created: pod, svc, statefulset (1 - 1), ingress, pvc, and pv.

  3. Check the events related to the pod and its associated objects (see the events table below).

  4. Review the Jenkins logs.
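As a minimal sketch of these checks, assuming operations center runs as the pod cjoc-0 in your default namespace:

$ kubectl get pod cjoc-0
$ kubectl get pod,svc,statefulset,ingress,pvc,pv
$ kubectl describe pod cjoc-0
$ kubectl logs cjoc-0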

Managed controller provisioning

  1. Check the connectivity logs; they will most likely reveal the cause of the problem.

  2. Check the pod status.

  3. Verify that all associated objects have been created: pod, svc, statefulset (1 - 1), ingress, pvc, and pv.

  4. Check the events related to the pod and its associated objects (see the events table below).

  5. Review the Jenkins logs (see the command sketch after this list).
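The same checks apply to a managed controller; a sketch, assuming the controller pod is named controller1-0:

$ kubectl get pod controller1-0
$ kubectl describe pod controller1-0
$ kubectl logs controller1-0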

Build agent provisioning

  1. Review the Jenkins logs; they will most likely reveal the cause of the problem.

  2. Check the Kubernetes events (see the command sketch after this list).

  3. Review the Kubernetes shared cloud item configuration in operations center.
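For the events check, recent cluster events can be listed sorted by time; a minimal sketch, assuming the builds run in your default namespace:

$ kubectl get events --sort-by=.lastTimestamp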

CloudBees CI basic operations

Viewing cluster resources

# Gives you quick readable detail
$ kubectl get -a pod,statefulset,svc,ingress,pvc,pv -o wide
# Gives you a high level of detail
$ kubectl get -a pod,statefulset,svc,ingress,pvc,pv -o yaml
# Describe commands with verbose output
$ kubectl describe <TYPE> <NAME>
# Gives you backend storage information
$ kubectl get storageclass

Troubleshooting failed builds in pods

To troubleshoot failing builds that use Kubernetes pods for the build agent:

  1. Go to your Kubernetes pod template configuration.

  2. Change the Pod Retention setting to On Failure.

  3. Run your build again.

Make note of the line at the top of the new build's log that starts with Agent java-1r1t5 is provisioned from template …​; here, java-1r1t5 is the name of the pod being used for this new build.

Wait for that build to fail. The Pod Retention setting leaves the Kubernetes pod running even after your build fails. You can then use the steps in the Pod access section to get direct shell access to the build pod, and you can try running the failed command again. Check the build log to see what command failed and try running that yourself. Involve the maintainer of the source code that failed to build in the diagnostic process.

You have to manually clean up any failed build pods using kubectl delete pod POD_NAME while the Pod Retention setting is set to On Failure, so keep this setting enabled only while you are troubleshooting a build issue.
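For example, to find and delete a leftover build pod, using the pod name from the build log above:

$ kubectl get pods
$ kubectl delete pod java-1r1t5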

Pod access

# Access the pod's bash shell
$ kubectl exec <POD_NAME> -i -t -- bash -li
controller2-0:/$ ps -ef
PID   USER     TIME   COMMAND
    1 jenkins    0:00 /sbin/tini -- /usr/local/bin/launch.sh
    5 jenkins    1:53 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slav
  481 jenkins    0:00 bash -li
  485 jenkins    0:00 ps -ef

# Run a single command in the pod
$ kubectl exec <POD_NAME> -- ps -ef
PID   USER     TIME   COMMAND
    1 jenkins    0:00 /sbin/tini -- /usr/local/bin/launch.sh
    5 jenkins    2:05 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slaveAgentPort=50000 -DMASTER_GRANT_ID=270bd80c-3e5c-498c-88fe-35ac9e11f3d3 -Dcb.IMProp.warProfiles.cje=kubernetes.json -DMASTER_INDEX=1 -Dcb.IMProp.warProfiles=kubernetes.json -DMASTER_OPERATIONSCENTER_ENDPOINT=http://cjoc/cjoc -DMASTER_NAME=controller2 -DMASTER_ENDPOINT=http://cje.support-core.beescloud.k8s.local/controller2/ -jar -Dcb.distributable.name=Docker Common CJE -Dcb.distributable.commit_sha=888f01a54c12cfae5c66ec27fd4f2a7346097997 /usr/share/jenkins/jenkins.war --webroot=/tmp/jenkins/war --pluginroot=/tmp/jenkins/plugins --prefix=/controller2/
  645 jenkins    0:00 ps -ef

Access to the pod logs

$ kubectl logs -f <POD_NAME>
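If the pod has restarted, the logs of the previous container instance are often more useful; --previous is a standard kubectl flag:

$ kubectl logs <POD_NAME> --previous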

Pod scale down/up

$ kubectl scale statefulset/controller2 --replicas=0
statefulset "controller2" scaled
$ kubectl get -a statefulset -o wide
NAME          DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
cjoc          1         1         1d    jenkins      cloudbees/cloudbees-cloud-core-oc:2.107.1.2
controller1   1         1         2h    jenkins      cloudbees/cloudbees-core-mm:2.107.1.2
controller2   0         0         36m   jenkins      cloudbees/cloudbees-core-mm:2.107.1.2
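Scaling the instance back up is the same command with --replicas=1:

$ kubectl scale statefulset/controller2 --replicas=1
statefulset "controller2" scaled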

CloudBees CI Cluster resources

In the installation phase of CloudBees CI, the following service accounts, roles, and role bindings are created.

$ kubectl get sa,role,rolebinding
NAME         SECRETS   AGE
sa/cjoc      1         21h
sa/default   1         21h
sa/jenkins   1         21h

NAME                      AGE
roles/master-management   21h
roles/pods-all            21h

NAME                   AGE
rolebindings/cjoc      21h
rolebindings/jenkins   21h

Once the installation is complete and the CloudBees CI cluster is up and running, you can check the status of the most important CloudBees CI resources: pod, statefulset, svc, ingress, pvc, and pv.

$ kubectl get pod,statefulset,svc,ingress,pvc,pv
NAME               READY     STATUS    RESTARTS   AGE
po/cjoc-0          1/1       Running   0          21h
po/controller1-0   1/1       Running   0          14h

NAME                       DESIRED   CURRENT   AGE
statefulsets/cjoc          1         1         21h
statefulsets/controller1   1         1         14h

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
svc/cjoc          ClusterIP   100.66.207.191   <none>        80/TCP,50000/TCP   21h
svc/controller1   ClusterIP   100.67.1.49      <none>        80/TCP,50000/TCP   14h

NAME              HOSTS                                  ADDRESS            PORTS     AGE
ing/cjoc          cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        21h
ing/default       cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        21h
ing/controller1   cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        14h

NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/jenkins-home-cjoc-0          Bound     pvc-c5cad012-2b69-11e8-80fc-12582571ed5c   20Gi       RWO            gp2            21h
pvc/jenkins-home-controller1-0   Bound     pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c   50Gi       RWO            gp2            14h

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                            STORAGECLASS   REASON    AGE
pv/pvc-c5cad012-2b69-11e8-80fc-12582571ed5c   20Gi       RWO            Delete           Bound     cje-on-support-core/jenkins-home-cjoc-0          gp2                      21h
pv/pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c   50Gi       RWO            Delete           Bound     cje-on-support-core/jenkins-home-controller1-0   gp2                      14h
See Kubernetes Troubleshoot Clusters for more information.

The following sections describe the expected state of the different Kubernetes resources. The definition of each Kubernetes resource is taken from the official Kubernetes documentation.

Pods

A pod is the smallest and simplest Kubernetes object, which represents a set of running containers on your cluster. A Pod is typically set up to run a single primary container, although a pod can also run optional sidecar containers that add supplementary features like logging. Pods are commonly managed by a Deployment.

The kubectl get pod command lists the applications currently running in the cluster. Applications that are stopped or not deployed will not appear as pods in the cluster.

$ kubectl get pod
NAME               READY     STATUS    RESTARTS   AGE
po/cjoc-0          1/1       Running   0          21h
po/controller1-0   1/1       Running   0          14h
See Kubernetes Debugging Pods for more information.
Pod events

Pod events provide insights into why a specific pod is failing to start in the cluster. In other words, pod events tell you why a specific application cannot start or be deployed in the cluster.

The table below summarizes the most common pod events that might occur in CloudBees CI.

To get the list of events associated with a given pod you will need to run:

$ kubectl describe pod the_pod_name

For example:

$ kubectl describe pod cjoc-0
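Alternatively, the events for a single object can be listed directly with a field selector, avoiding the full describe output; a sketch for the cjoc-0 pod:

$ kubectl get events --field-selector involvedObject.name=cjoc-0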
  • Status: ImagePullBackOff
    Cause: The image you are using cannot be found in the Docker registry, or, when using a private registry, there is no secret configured.

  • Events: Node issues
    Cause: See below. Get node info with kubectl describe nodes.

  • Status: Pending
    Events: Insufficient memory
    Cause: Not enough memory. Either increase the nodes or node size in the cluster, or reduce the memory requirement of operations center (YAML file) or the controller (under its configuration).

  • Status: Pending
    Events: Insufficient cpu
    Cause: Not enough CPUs. Either increase the nodes or node size in the cluster, or reduce the CPU requirement of operations center (YAML file) or the controller (under its configuration).

  • Status: Pending
    Events: NoVolumeZoneConflict
    Cause: There are no nodes available in the zone where the persistent volume was created. Start more nodes in that zone.

  • Status: Pending
    Events: CrashLoopBackOff
    Cause: Find out why the Docker container crashes. The easiest first check is whether there are any errors in the output of the previous startup, for example with kubectl logs --previous <POD_NAME>.

  • Status: Running but restarting every so often
    Events: describe pod shows Last State: Terminated, Reason: OOMKilled, Exit Code: 137
    Cause: The Xmx or MaxRAM JVM parameters are too high for the container memory. Try increasing the memory limit.

  • Status: Unknown
    Cause: This usually indicates a bad node, especially if several pods on that node are in the same state. Check with kubectl get pods --all-namespaces -o wide.
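To confirm a suspected bad node (the last row above), list all pods with the nodes they run on and then describe the node in question:

$ kubectl get pods --all-namespaces -o wide
$ kubectl describe node <NODE_NAME>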

StatefulSet

A StatefulSet manages the deployment and scaling of a set of Pods and provides guarantees about the ordering and uniqueness of these Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

A StatefulSet operates under the same pattern as any other Controller. You define your desired state in a StatefulSet object and the StatefulSet controller makes any necessary updates to get there from the current state.

$ kubectl get statefulset
NAME                       DESIRED   CURRENT   AGE
statefulsets/cjoc          1         1         21h
statefulsets/controller1   1         1         14h

In CloudBees CI, the expected DESIRED and CURRENT status of any application should be 1. Neither Jenkins nor the build agents support more than one instance running at the same time.
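If DESIRED is 0 for an instance that should be running, it was most likely scaled down and can be scaled back up as shown in the scale down/up section:

$ kubectl scale statefulset/controller1 --replicas=1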

Service

A service is the API object that describes how to access applications (such as a set of Pods) and can describe ports and load-balancers.

The access point can be internal or external to the cluster.

$ kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
svc/cjoc          ClusterIP   100.66.207.191   <none>        80/TCP,50000/TCP   21h
svc/controller1   ClusterIP   100.67.1.49      <none>        80/TCP,50000/TCP   14h

A service must exist for each application running in the cluster; otherwise, the application will not be accessible.
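A quick way to check that a service actually selects a running pod is to list its endpoints; an empty ENDPOINTS column means no pod matches the service selector. A sketch for the controller1 service above:

$ kubectl get endpoints controller1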

See Kubernetes Debugging Services for more information.

Ingress

Ingresses represent the routes used to access the applications; an ingress can be thought of as a load balancer.

$ kubectl get ingress
NAME              HOSTS                                  ADDRESS            PORTS     AGE
ing/cjoc          cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        21h
ing/default       cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        21h
ing/controller1   cje.support-core.beescloud.k8s.local   af9463f6a2b68...   80        14h

The required ingresses for CloudBees CI to work are:

  • An ing/default ingress as the default entry point to the cluster

  • An ing/cjoc ingress for access to the operations center

  • An ing/<MASTER_ID> ingress for access to each controller

The product expects these ingresses to be present, so they must not be modified, even to reduce complexity or scope. Modifying ingresses at the Kubernetes level might cause issues in the product, such as managed controllers becoming unable to communicate correctly with operations center.
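Reviewing the routing rules is safe because describing an ingress is read-only; a sketch for the controller1 ingress:

$ kubectl describe ingress controller1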

Persistent Volume Claims (PVC)

Persistent volume claims (PVCs) represent the volumes associated with each application running in the cluster.

$ kubectl get pvc
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/jenkins-home-cjoc-0          Bound     pvc-c5cad012-2b69-11e8-80fc-12582571ed5c   20Gi       RWO            gp2            21h
pvc/jenkins-home-controller1-0   Bound     pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c   50Gi       RWO            gp2            14h

PVC events

The table below summarizes the most common events associated with PVCs that might occur in CloudBees CI.

To obtain the list of events associated with a given PVC, run:

$ kubectl describe pvc the_pvc_name

For example:

$ kubectl describe pvc jenkins-home-cjoc-0
  • Status: Pending
    Events: no persistent volumes available for this claim and no storage class is set
    Cause: There is no default storageclass; follow these instructions to set a default storageclass.

Persistent Volume (PV)

Persistent volumes (PVs) represent the volumes created in the cluster.

$ kubectl get pv
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                            STORAGECLASS   REASON    AGE
pv/pvc-c5cad012-2b69-11e8-80fc-12582571ed5c   20Gi       RWO            Delete           Bound     cje-on-support-core/jenkins-home-cjoc-0          gp2                      21h
pv/pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c   50Gi       RWO            Delete           Bound     cje-on-support-core/jenkins-home-controller1-0   gp2                      14h
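Note that the RECLAIM POLICY above is Delete, meaning the underlying volume is deleted together with its PVC. If the data must survive an operation that deletes the PVC, the policy can first be switched to Retain with a standard patch; a sketch, where <PV_NAME> is the volume name from the listing:

$ kubectl patch pv <PV_NAME> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'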

Accessing $JENKINS_HOME

Accessing Jenkins Home directory (pod running)

By running the following sequence of commands, you can determine the path of the $JENKINS_HOME inside a given pod and work with it for a specific CloudBees CI instance.

# Get the location of the $JENKINS_HOME
$ kubectl describe pod controller2-0 | grep " jenkins-home " | awk '{print $1}'
/var/jenkins_home
# Access the bash of a given pod
$ kubectl exec controller2-0 -i -t -- bash -i -l
controller2-0:/$ cd /var/jenkins_home/
controller2-0:~$ ps -ef
PID   USER     TIME   COMMAND
    1 jenkins    0:00 /sbin/tini -- /usr/local/bin/launch.sh
    5 jenkins    1:46 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slav
  516 jenkins    0:00 bash -i -l
  524 jenkins    0:00 ps -ef
controller2-0:~$ ps -ef | grep java
    5 jenkins    1:46 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slaveAgentPort=50000 -DMASTER_GRANT_ID=270bd80c-3e5c-498c-88fe-35ac9e11f3d3 -Dcb.IMProp.warProfiles.cje=kubernetes.json -DMASTER_INDEX=1 -Dcb.IMProp.warProfiles=kubernetes.json -DMASTER_OPERATIONSCENTER_ENDPOINT=http://cjoc/cjoc -DMASTER_NAME=controller2 -DMASTER_ENDPOINT=http://cje.support-core.beescloud.k8s.local/controller2/ -jar -Dcb.distributable.name=Docker Common CJE -Dcb.distributable.commit_sha=888f01a54c12cfae5c66ec27fd4f2a7346097997 /usr/share/jenkins/jenkins.war --webroot=/tmp/jenkins/war --pluginroot=/tmp/jenkins/plugins --prefix=/controller2/
  528 jenkins    0:00 grep java
# Example operation: copy the jobs directory out of the pod
$ kubectl cp controller2-0:/var/jenkins_home/jobs/ ./jobs/
tar: removing leading '/' from member names

Accessing Jenkins Home directory (pod not running)

# Stop the pod
$ kubectl scale statefulset/controller2 --replicas=0
statefulset "controller2" scaled
# Create a rescue pod that mounts the $JENKINS_HOME volume
# without modifying its contents
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: rescue-pod
spec:
  volumes:
    - name: rescue-storage
      persistentVolumeClaim:
        claimName: jenkins-home-controller2-0
  containers:
    - name: rescue-container
      image: nginx
      volumeMounts:
        - mountPath: "/tmp/jenkins-home"
          name: rescue-storage
EOF
# Access the bash of the rescue pod
$ kubectl exec rescue-pod -i -t -- bash -i -l
mesg: ttyname failed: Success
root@rescue-pod:/# cd /tmp/jenkins-home/
root@rescue-pod:/tmp/jenkins-home#
# Example operation: copy the jobs directory out of the volume
$ kubectl cp rescue-pod:/tmp/jenkins-home/jobs/ ./jobs/
tar: removing leading '/' from member names
# Delete the rescue pod
$ kubectl delete pod rescue-pod
pod "rescue-pod" deleted
# Start the pod again
$ kubectl scale statefulset/controller2 --replicas=1
statefulset "controller2" scaled

Operations center setup customization

The operations center instance can be configured either by editing cloudbees-core.yml or by using the Kubernetes command line.

# Set the memory to 2G
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","resources":{"limits":{"memory": "2G"}}}]}}}}'
statefulset "cjoc" patched

# Set initialDelay to 320 seconds
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","livenessProbe":{"initialDelaySeconds":320}}]}}}}'
statefulset "cjoc" patched

# Set timeout to 10 seconds
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","livenessProbe":{"timeoutSeconds":10}}]}}}}'
statefulset "cjoc" patched
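To verify that a patch took effect, the live value can be read back with a JSONPath query; a sketch for the memory limit set above:

$ kubectl get statefulset cjoc -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'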

Performance issues - high CPU/blocked threads

Expanding managed controller disk space

A managed controller may run out of disk space when the provisioned storage is insufficient for the users of that controller. Kubernetes persistent volumes have a fixed size defined when the volume is created. Expanding the space for a managed controller therefore requires restoring the existing managed controller into a persistent volume for a new (larger) managed controller.

Use these steps to restore a backup of an existing managed controller to a new (larger) managed controller:

  1. On the controller that needs more space, run the backup job.

  2. Stop the controller in operations center.

  3. Delete the Persistent Volume Claim in Kubernetes:

    $ kubectl delete pvc jenkins-home-<<controller name>>-0
  4. Change the size of the controller’s volume in the controller’s configuration.

  5. Start the controller again in operations center.

  6. Create a restore job on the controller to restore from the backup made in step 1.

  7. Run the restore job and watch its console output; after the restore completes, it will contain a link to restart the controller.

  8. After the expanded controller restarts, verify the jobs on it.
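After the controller is started again in step 5, the recreated PVC can be checked to confirm the new size before running the restore; a sketch, with the same placeholder controller name:

$ kubectl get pvc jenkins-home-<<controller name>>-0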

Troubleshooting steps for diagnosing an unhealthy Kubernetes node

While CloudBees is not a Kubernetes vendor, our clients often encounter problems in their Kubernetes cluster that present themselves as issues with CloudBees software.

Follow these steps to help diagnose a Kubernetes node that is not healthy:

  1. Run the following command to check which Kubernetes node each pod is running on, and see whether the pods encountering an issue are all on the same node:

    $ kubectl -n cloudbees-core get node,pod -o wide
  2. Next, the following commands provide more information about a node or pod that is experiencing problems:

    $ kubectl describe node <name>
    $ kubectl describe pod <name>
  3. Use the steps from the Kubernetes Monitor Node Health documentation to run the node problem detector.

If you are able to determine a Kubernetes node with an issue, then follow the documentation to Safely Drain a Node while Respecting the PodDisruptionBudget.
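As a sketch of that procedure, cordon the node first so that no new pods are scheduled onto it, then drain it (exact flags vary by kubectl version; --ignore-daemonsets is commonly required):

$ kubectl cordon <NODE_NAME>
$ kubectl drain <NODE_NAME> --ignore-daemonsets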

In August 2020, the Jenkins project voted to replace the term master with controller. We have taken a pragmatic approach to cleaning these up, ensuring as little downstream impact as possible. CloudBees is committed to ensuring a culture and environment of inclusiveness and acceptance - this includes ensuring the changes are not just cosmetic ones, but pervasive. As this change happens, please note that the term master has been replaced throughout the latest versions of the CloudBees documentation with controller (as in managed controller, client controller, team controller) except when still used in the UI or in code.