Issue
- A pod fails to start and kubectl describe pod <podName> shows a mount failure similar to the following:

Warning Failed 58m (x4 over 59m) kubelet, <hostname> Error: failed to start container "jenkins": Error response from daemon: linux mounts: Path /mnt/jenkins/data/cjoc-pv1 is mounted on /mnt/jenkins but it is not a shared or slave mount.
Environment
- Kubernetes 1.10.0 and 1.10.1
- Ubuntu
Explanation

Kubernetes version 1.10 added support for the Mount Propagation feature. In Kubernetes 1.10.0, the default was first set to "HostToContainer" (rslave). Consequently, all volume mounts in containers became rslave on Linux by default. This caused a regression in several cases. Ubuntu in particular is impacted, as this distribution does not mount the root directory as rshared. Per the release note:

MountPropagation feature is now beta. As a consequence, all volume mounts in containers are now rslave on Linux by default. To make this default work in all Linux environments the entire mount tree should be marked as shareable, e.g. via mount --make-rshared /. All Linux distributions that use systemd already have the root directory mounted as rshared and hence they need not do anything. In Linux environments without systemd we recommend running mount --make-rshared / during boot before docker is started. (@jsafrane)

In Kubernetes 1.10.2, the default was reverted to "None" (private) to fix this problem.
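To see which propagation mode a mount currently has, the PROPAGATION column of findmnt (util-linux) can be inspected. A minimal diagnostic sketch, using the root mount as an example (substitute the path from the error message, e.g. /mnt/jenkins, on an affected host):

```shell
# Print the propagation mode of the root mount.
# On systemd-based distributions this typically reports "shared";
# on an affected host the impacted path reports "private" instead.
findmnt -o TARGET,PROPAGATION /
```

A "private" result for the volume's host path is what triggers the "not a shared or slave mount" error above.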
Resolution
Upgrade Kubernetes to version 1.10.2 or later.
Workaround

The workaround is to ensure that the impacted host path is mounted as rshared on the Kubernetes hosts. For example, when seeing the following error:

Warning Failed 58m (x4 over 59m) kubelet, myhost1 Error: failed to start container "jenkins": Error response from daemon: linux mounts: Path /mnt/jenkins/data/cjoc-pv1 is mounted on /mnt/jenkins but it is not a shared or slave mount.

The path /mnt/jenkins must be mounted as rshared on all hosts:
mount --make-rshared /mnt/jenkins
Make sure this command runs at host startup, before Docker and the kubelet start, so the setting survives reboots.
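One way to apply the command at boot, before the container runtime starts, is a small systemd oneshot unit. A sketch, where the unit name mnt-jenkins-rshared.service and the path /mnt/jenkins are taken from the example above and should be adapted to the actual environment:

```ini
# /etc/systemd/system/mnt-jenkins-rshared.service (hypothetical unit name)
[Unit]
Description=Mark /mnt/jenkins as rshared for Kubernetes mount propagation
# Run before the container runtime and the kubelet come up.
Before=docker.service kubelet.service

[Service]
Type=oneshot
ExecStart=/bin/mount --make-rshared /mnt/jenkins
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable mnt-jenkins-rshared.service. Alternatively, the mount --make-rshared command can be added to an existing boot script that runs before Docker.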