Issue
For users who ran CloudBees Core on Modern Cloud Platforms version 2.150.3.2 or 2.150.2.3, the default container USER was removed, so the container ran as the root user and some files written to JENKINS_HOME ended up owned by root. All Kubernetes variants except for OpenShift are affected, since OpenShift always schedules containers using a generated UID.
You may not notice this issue while running the 2.150.3.2 or 2.150.2.3 release, but when you upgrade to a newer release, you may see the following error in the Kubernetes Pod logs for your CloudBees Jenkins Operations Center or any of your controllers:
kubectl logs cjoc-0
+ touch /var/jenkins_home/copy_reference_file.log
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
+ echo 'Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?'
+ exit 1
Another sample of errors:
ln: failed to create symbolic link ‘/var/jenkins_home/configure-jenkins.groovy.d’: Permission denied
cp: cannot create directory ‘/var/jenkins_home/configure-jenkins.groovy.d’: Permission denied
The specific files can be different, but you will likely find error messages such as:
- Can not write to ... Wrong volume permissions?
- cannot touch ... Permission denied
- failed to create ... Permission denied
- cannot create ... Permission denied
Environment
- CloudBees CI (CloudBees Core) on modern cloud platforms - Managed controller versions 2.150.3.2 and 2.150.2.3
- CloudBees CI (CloudBees Core) on modern cloud platforms - Operations Center versions 2.150.3.2 and 2.150.2.3
- All Kubernetes variants except for OpenShift are affected, since OpenShift always schedules containers using a generated UID.
Resolution
The fix is documented under Known issues in the release notes for CloudBees Core on Modern Cloud Platforms version 2.164.1.2.
- On your local machine, create a file called patch-permissions.yaml with the following contents:

kind: StatefulSet
spec:
  template:
    spec:
      containers:
        - name: jenkins
          securityContext:
            runAsUser: 1000
      initContainers:
        - name: init-chown
          image: alpine
          env:
            - name: JENKINS_HOME
              value: /var/jenkins_home
            - name: MARKER
              value: .cplt2-5503
            - name: UID
              value: '1000'
          command:
            - sh
            - -c
            - if [ ! -f $JENKINS_HOME/$MARKER ]; then chown $UID:$UID -R $JENKINS_HOME; touch $JENKINS_HOME/$MARKER; chown $UID:$UID $JENKINS_HOME/$MARKER; fi
          volumeMounts:
            - mountPath: "/var/jenkins_home"
              name: "jenkins-home"
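Note that the init container's command is idempotent: it performs the recursive chown only once, then drops a marker file so that subsequent Pod restarts skip the potentially slow recursive pass. The following is a minimal local sketch of that logic, using a temporary directory and the current user's UID/GID in place of 1000 so it runs without root privileges:

```shell
#!/bin/sh
# Local sketch of the init container's one-time chown logic.
# JENKINS_HOME and MARKER mirror the env vars in patch-permissions.yaml;
# the current user's UID/GID stand in for 1000 so chown succeeds without root.
JENKINS_HOME=$(mktemp -d)
MARKER=.cplt2-5503
OWNER="$(id -u):$(id -g)"

run_init() {
  if [ ! -f "$JENKINS_HOME/$MARKER" ]; then
    chown -R "$OWNER" "$JENKINS_HOME"
    touch "$JENKINS_HOME/$MARKER"
    chown "$OWNER" "$JENKINS_HOME/$MARKER"
    echo "chown performed"
  else
    echo "marker found, skipping chown"
  fi
}

run_init   # first run: performs the chown and creates the marker
run_init   # second run: marker exists, so the chown is skipped
```

Because the marker file lives inside JENKINS_HOME itself, the one-time behavior survives Pod restarts as long as the volume persists.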
- From your local machine, execute the following patch command on the Kubernetes cluster:

kubectl patch statefulset.apps/cjoc -p "$(cat patch-permissions.yaml)"
- On each affected controller, go into the Core UI, select controller > Configure > Advanced Configuration YAML, and add the same YAML code:

kind: StatefulSet
spec:
  template:
    spec:
      containers:
        - name: jenkins
          securityContext:
            runAsUser: 1000
      initContainers:
        - name: init-chown
          image: alpine
          env:
            - name: JENKINS_HOME
              value: /var/jenkins_home
            - name: MARKER
              value: .cplt2-5503
            - name: UID
              value: '1000'
          command:
            - sh
            - -c
            - if [ ! -f $JENKINS_HOME/$MARKER ]; then chown $UID:$UID -R $JENKINS_HOME; touch $JENKINS_HOME/$MARKER; chown $UID:$UID $JENKINS_HOME/$MARKER; fi
          volumeMounts:
            - mountPath: "/var/jenkins_home"
              name: "jenkins-home"
- Restart each controller after applying the changes.