What you need to know when using Kaniko from Kubernetes Jenkins Agents

Article ID: 360031223512

Issue

  • I am building docker images with kaniko but some files are missing in the produced image

  • Kaniko fails to build images with error similar to rm: cannot remove '<file>': Device or resource busy

Explanation

Kaniko is a solution that enables building docker images without a docker daemon. To achieve this, it constructs the image layers in the kaniko container userspace as the build progresses; see the Design Overview.

Understanding how the execution works is important to avoid unexpected issues when using the kaniko image. In short, the kaniko executor extracts the file system of the base image to the root of the container, takes a snapshot after each Dockerfile command, and appends the resulting layers to the base image. See Build Execution and Snapshotting for more details.
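
To make this concrete, here is a minimal sketch of an executor invocation; the context directory, Dockerfile path and destination image below are illustrative placeholders, not part of the examples in this article:

# The executor reads the Dockerfile from the build context, unpacks the base
# image file system into the kaniko container, runs each Dockerfile command,
# snapshots the changes into a new layer after each command, and finally pushes
# the assembled image to the destination registry.
/kaniko/executor \
  --context /workspace \
  --dockerfile /workspace/Dockerfile \
  --destination registry.example.com/team/app:latest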

An important part of the snapshotting is the exclusion of certain directories from the built image - for example the /proc, /sys and /var/run/secrets directories. Above all, any volume mounted to the kaniko container is automatically excluded from the snapshot.

For example, if /home/jenkins is mounted from a volume in the kaniko container, building the image https://github.com/jenkinsci/docker-agent/blob/9baedd76ad1829f05b5ca80107c77ea921e48b3a/alpine/Dockerfile would result in an image without any content under /home/jenkins, or in errors if some operations in the Dockerfile conflict with what already exists under /home/jenkins (whatever is mounted to the kaniko container is visible during the image build).

The kaniko image is built from scratch and is designed for this specific use case. It is recommended to use that image rather than adding the kaniko executor to custom images.

Resolution

When building docker images with kaniko, be aware of the volumes that are mounted to the kaniko container and ensure they do not conflict with the image being built.
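
One way to review those mounts is to inspect the agent pod directly, assuming kubectl access to the cluster (the pod name below is illustrative):

# Print each container of the agent pod together with its volume mount paths
kubectl get pod example-kaniko-volumes -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.volumeMounts[*].mountPath}{"\n"}{end}'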

Example

The following example demonstrates the behavior described above: how kaniko processes snapshots and may exclude directories from the image layers.

Let’s build an image from the following Dockerfile using kaniko from a Kubernetes agent pod:

FROM jenkins/agent
MAINTAINER CloudBees Support Team <support@cloudbees.com>
RUN mkdir /home/jenkins/.m2

The pipeline looks like the following:

pipeline {
  agent {
    kubernetes {
      label 'example-kaniko-volumes'
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: jnlp
    workingDir: /home/jenkins
  - name: kaniko
    workingDir: /home/jenkins
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials
          items:
            - key: .dockerconfigjson
              path: config.json
"""
    }
  }
  stages {
    stage('Build with Kaniko') {
      environment {
        PATH = "/busybox:/kaniko:$PATH"
      }
      steps {
        container(name: 'kaniko', shell: '/busybox/sh') {

          writeFile file: "Dockerfile", text: """
            FROM jenkins/agent
            MAINTAINER CloudBees Support Team <support@cloudbees.com>
            RUN mkdir /home/jenkins/.m2
          """

          sh '''#!/busybox/sh
            /kaniko/executor --context `pwd` --verbosity debug --destination cloudbees/jnlp-from-kaniko:latest
          '''
        }
      }
    }
  }
}

The build succeeds but the produced image does not contain the /home/jenkins/.m2 directory:

cloudbees$ docker run -ti --rm cloudbees/jnlp-from-kaniko:latest ls -la /home/jenkins
total 32
drwxr-xr-x 1 jenkins jenkins 4096 Aug  1 04:24 .
drwxr-xr-x 1 root    root    4096 Aug  1 04:24 ..
-rw-r--r-- 1 jenkins jenkins  220 May 15  2017 .bash_logout
-rw-r--r-- 1 jenkins jenkins 3526 May 15  2017 .bashrc
drwxr-xr-x 2 jenkins jenkins 4096 May 31 10:32 .jenkins
-rw-r--r-- 1 jenkins jenkins  675 May 15  2017 .profile
drwxr-xr-x 2 jenkins jenkins 4096 May 31 10:32 agent

If we look at the logs, we can see records like:

[...]
DEBU[0009] Not adding /home/jenkins because it is whitelisted
[...]
INFO[0015] RUN mkdir /home/jenkins/.m2
INFO[0015] cmd: /bin/sh
INFO[0015] args: [-c mkdir /home/jenkins/.m2]
INFO[0015] Taking snapshot of full filesystem...
DEBU[0015] Skipping paths under /kaniko, as it is a whitelisted directory
DEBU[0015] Skipping paths under /home/jenkins, as it is a whitelisted directory
DEBU[0015] Skipping paths under /var/run, as it is a whitelisted directory
DEBU[0015] Skipping paths under /var/jenkins_config, as it is a whitelisted directory
DEBU[0015] Skipping paths under /dev, as it is a whitelisted directory
DEBU[0015] Skipping paths under /proc, as it is a whitelisted directory
DEBU[0015] Skipping paths under /sys, as it is a whitelisted directory
DEBU[0015] Skipping paths under /busybox, as it is a whitelisted directory

The /home/jenkins/.m2 directory is not added to the image because /home/jenkins is excluded from the snapshot. There is a conflict:

  • the Dockerfile manipulates /home/jenkins

  • the Kubernetes agent pod uses /home/jenkins as its working directory (and it is mounted at that location in all containers of the pod)

To solve that problem, let’s change the working directory of the Kubernetes agent containers to /tmp/jenkins instead of /home/jenkins:

pipeline {
  agent {
    kubernetes {
      label 'example-kaniko-volumes'
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: jnlp
    workingDir: /tmp/jenkins
  - name: kaniko
    workingDir: /tmp/jenkins
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials
          items:
            - key: .dockerconfigjson
              path: config.json
"""
    }
  }
  stages {
    stage('Build with Kaniko') {
      environment {
        PATH = "/busybox:/kaniko:$PATH"
      }
      steps {
        container(name: 'kaniko', shell: '/busybox/sh') {

          writeFile file: "Dockerfile", text: """
            FROM jenkins/agent
            MAINTAINER CloudBees Support Team <support@cloudbees.com>
            RUN mkdir /home/jenkins/.m2
          """

          sh '''#!/busybox/sh
            /kaniko/executor --context `pwd` --verbosity debug --destination cloudbees/jnlp-from-kaniko:latest
          '''
        }
      }
    }
  }
}

The build succeeds and the .m2 directory is now present:

cloudbees$ docker run -ti --rm cloudbees/jnlp-from-kaniko:latest ls -la /home/jenkins
total 32
drwxr-xr-x 1 jenkins jenkins 4096 Aug  1 04:24 .
drwxr-xr-x 1 root    root    4096 Aug  1 04:24 ..
-rw-r--r-- 1 jenkins jenkins  220 May 15  2017 .bash_logout
-rw-r--r-- 1 jenkins jenkins 3526 May 15  2017 .bashrc
drwxr-xr-x 2 jenkins jenkins 4096 May 31 10:32 .jenkins
drwxr-xr-x 2 jenkins root    4096 Aug  1 04:24 .m2
-rw-r--r-- 1 jenkins jenkins  675 May 15  2017 .profile
drwxr-xr-x 2 jenkins jenkins 4096 May 31 10:32 agent

(Note: since Kubernetes Plugin 1.18 (and the fix for JENKINS-58705), the default working directory has changed from /home/jenkins to /home/jenkins/agent.)

Troubleshooting

To troubleshoot this kind of issue, add the flag --verbosity debug to the kaniko executor command - i.e. /kaniko/executor --verbosity debug ... - and search the logs for the file system path related to the failure.
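
For example, reusing the build command from this article, the debug output can be filtered for the whitelisting messages shown earlier (a quick sketch, to be adapted to the actual destination image):

# Run the build in debug mode and keep only the lines about excluded (whitelisted) paths
/kaniko/executor --context `pwd` --verbosity debug --destination cloudbees/jnlp-from-kaniko:latest 2>&1 | grep -i whitelisted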