Builds fail on Kubernetes agents after upgrading the Kubernetes plugin to 1.18.0 or later

Article ID: 360038408831

Issue

After upgrading to CloudBees Core version 2.176.4.3 or newer, or to Kubernetes plugin version 1.18.0 or later, builds that use their own Docker containers start to fail with "permission denied" or similar errors when accessing files within /home/jenkins.

Explanation

Kubernetes plugin version 1.18.0 changed the default working directory for agents from /home/jenkins to /home/jenkins/agent. This change was made because the Kubernetes plugin uses an emptyDir volume to mount the agent remote directory: users who built their own containers and put data in the /home/jenkins directory could not access that data because it was masked when the working directory was mounted on top of it. See JENKINS-58705 for more on this issue.

In some specific cases, where the working directory was explicitly set for containers, this may be a breaking change, as it could result in containers having the working directory mounted in a different location. A general rule is that all containers in the pod should have their workingDir pointing to the same location. If workingDir is not explicitly set, it defaults to the same value as the jnlp container.

Improvements to the Kubernetes plugin in version 1.22.1 may remove this limitation. Refer to JENKINS-58975 for more on this.

Resolution

Make sure that all containers have their working directory (i.e. workingDir) pointing to the same location.
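As an illustration (not a template taken from the article), the following pod definition sets the same workingDir on every container, matching the /home/jenkins/agent default used by Kubernetes plugin 1.18.0 and later. The container names and images are placeholders; adapt them to your own pod template YAML.

```yaml
# Sketch: every container shares the same working directory
# (/home/jenkins/agent, the default for Kubernetes plugin 1.18.0+).
# Container names and images below are illustrative placeholders.
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8-openjdk-11
    command: ["sleep"]
    args: ["infinity"]
    workingDir: /home/jenkins/agent   # matches the jnlp container's working directory
  - name: golang
    image: golang:1.20
    command: ["sleep"]
    args: ["infinity"]
    workingDir: /home/jenkins/agent   # same location in every container
```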

Workaround

Revert to using /home/jenkins as the working directory

A workaround is to change the working directory of pod containers back to /home/jenkins. This workaround is only possible when using YAML to define agent pod templates (see JENKINS-60977).
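As a sketch of what this might look like, assuming the agent pod is defined with raw YAML, each container's workingDir is pointed back at /home/jenkins. The container name and image are placeholders.

```yaml
# Sketch of the workaround: set workingDir back to /home/jenkins
# in the YAML pod template. Container name and image are placeholders.
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: my-custom-build-image:latest
    command: ["sleep"]
    args: ["infinity"]
    workingDir: /home/jenkins
```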

This could expose you to the same data masking issues previously mentioned, which were the motivation for changing this default. For a long-term solution, we recommend taking the steps described in the Resolution section above.