Introducing Kaniko
Kaniko is a utility that creates container images from a Dockerfile. The image is built inside a container or Kubernetes cluster, which allows users to build Docker images without installing Docker or running a privileged container.
Because Kaniko doesn't depend on the Docker daemon and executes each command in the Dockerfile entirely in userspace, it can build container images in environments that can't run the Docker daemon, such as a standard Kubernetes cluster.
The remainder of this chapter provides a brief overview of Kaniko and illustrates using it in CloudBees CI with a Declarative Pipeline.
How does Kaniko work?
Kaniko looks for the Dockerfile in the Kaniko context. The Kaniko context can be a GCS storage bucket, an S3 storage bucket, or a local directory. When the context is a GCS or S3 bucket, it must be a compressed tar file, which Kaniko downloads and expands before reading the Dockerfile. When the context is a local directory, Kaniko reads the Dockerfile directly.
Kaniko then extracts the filesystem of the base image named in the FROM statement of the Dockerfile. It executes each command in the Dockerfile, and after each command completes, it captures any filesystem differences. If there are differences, Kaniko applies them to the base image and updates the image metadata. Lastly, Kaniko publishes the newly created image to the desired Docker registry.
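To make that flow concrete, the sketch below shows two illustrative executor invocations, one with a local directory context and one with a compressed tar context stored in an S3 bucket. The bucket, registry, and image names are placeholders, not values from this guide.

# Build from a local directory that contains the Dockerfile
$ /kaniko/executor \
    --context dir:///workspace \
    --dockerfile Dockerfile \
    --destination <registry>/<repository>/my-image:latest

# Build from a compressed tar context stored in an S3 bucket (placeholder bucket name)
$ /kaniko/executor \
    --context s3://my-kaniko-context-bucket/context.tar.gz \
    --destination <registry>/<repository>/my-image:latest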
Security
Kaniko runs as an unprivileged container. However, Kaniko still needs to run as root to unpack the Docker base image into its container and to execute RUN Dockerfile commands that require root privileges.
Primarily, Kaniko offers a way to build Docker images without requiring a container that runs with the privileged flag or that mounts the Docker socket directly.
Additional security information can be found in the Security section of the Kaniko documentation. Also, this blog article on unprivileged container builds provides a deep dive on why Docker build needs root access.
Kaniko parameters
Kaniko has two key parameters: the Kaniko context and the image destination. The Kaniko context is the same as the Docker build context; it is the path where Kaniko expects to find the Dockerfile and any supporting files used to create the image. The destination parameter is the Docker registry where Kaniko publishes the image. Currently, Kaniko supports hub.docker.com, GCR, and ECR as Docker registries.
In addition to these parameters, Kaniko also needs a secret containing the authorization details required to push the newly created image to the Docker registry.
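For reference, the authorization details Kaniko reads use the standard Docker config.json format, mounted at /kaniko/.docker/config.json inside the Kaniko container. The sketch below shows a minimal example with placeholder values; the secret created later in this chapter produces an equivalent file automatically.

# Sketch of the Docker config file Kaniko uses for registry authentication
# (placeholder credentials; do not copy these values literally)
$ cat /kaniko/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "<base64-encoded username:password>"
    }
  }
}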
Kaniko debug image
The Kaniko executor image is built from scratch and doesn't contain a shell. The Kaniko project also provides a debug image, gcr.io/kaniko-project/executor:debug, which consists of the Kaniko executor image plus a busybox shell.
For more details on using the debug image, see the Debug Image section of the Kaniko documentation.
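One way to explore the debug image locally, assuming Docker is available on your workstation, is to start it with its busybox shell as the entrypoint. This is only an inspection sketch and is not part of the Pipeline later in this chapter.

# Open the busybox shell inside the Kaniko debug image to look around
$ docker run -it --rm --entrypoint /busybox/sh gcr.io/kaniko-project/executor:debug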
Requirements
To run this example, you need the following:
- A Kubernetes cluster with an installation of CloudBees CI
- A Docker account or another private Docker registry account
- Your Docker registry credentials
- The ability to run kubectl against your cluster (a quick check follows this list)
- A CloudBees CI account with permission to create a new Pipeline
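As a quick sanity check for the kubectl requirement, the commands below (a sketch, assuming your kubeconfig already points at the cluster running CloudBees CI) confirm that the cluster responds.

# Show which context kubectl is currently using
$ kubectl config current-context

# Confirm the cluster responds
$ kubectl get nodes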
Steps
These are the high-level steps for this example:
1. Create a new Kubernetes secret.
2. Create the Pipeline.
3. Run the Pipeline.
Create a new Kubernetes secret
The first step is to provide credentials that Kaniko uses to publish the new image to the Docker registry.
This example uses kubectl and a docker.com account.
If you are using a private Docker registry, you can use it instead of docker.com; just create the Kubernetes secret with the proper credentials for that registry.
Kubernetes provides a create secret command to store credentials for private Docker registries. Use the kubectl create secret docker-registry command to create this secret:
create secret command
$ kubectl create secret docker-registry docker-credentials \ (1)
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email-address>
(1) The name of the new Kubernetes secret.
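Optionally, you can confirm that the secret exists and contains a Docker config entry. The commands below are a verification sketch, not a required step.

# Confirm the secret exists
$ kubectl get secret docker-credentials

# Decode the Docker config payload stored in the secret
$ kubectl get secret docker-credentials \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode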
Create the Pipeline
Create a new pipeline job in CloudBees CI. In the pipeline field, paste the following Declarative Pipeline:
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - sleep
    args:
    - 9999999
    volumeMounts:
    - name: jenkins-docker-cfg
      mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials (1)
          items:
          - key: .dockerconfigjson
            path: config.json
"""
    }
  }
  stages {
    stage('Build with Kaniko') {
      steps {
        container(name: 'kaniko', shell: '/busybox/sh') {
          sh '''#!/busybox/sh
            echo "FROM jenkins/inbound-agent:latest" > Dockerfile
            /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest (2)
          '''
        }
      }
    }
  }
}
(1) This is where the docker-credentials secret, created in the previous step, is mounted into the Kaniko Pod at /kaniko/.docker/config.json.
(2) Replace <docker-username> with your Docker username; the image is pushed to the registry as <docker-username>/hello-kaniko:latest.
Save the new Pipeline job.
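After the Pipeline runs successfully, one way to verify the result is to pull the image that Kaniko pushed. This is a sketch that assumes Docker is available locally and that <docker-username> matches the destination used in the Pipeline.

# Pull the image that Kaniko pushed, to confirm it reached the registry
$ docker pull <docker-username>/hello-kaniko:latest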
Limitations
Kaniko does not use Docker to build the image, so there is no guarantee that it will produce exactly the same image as Docker would. In some cases, the number of layers could also differ.
Kaniko supports most Dockerfile commands, including multi-stage builds, but it does not support all of them. See the list of Kaniko issues to determine whether there is a known issue with a specific Dockerfile command. Some rare edge cases are discussed in the Limitations section of the Kaniko documentation.
Alternatives
There are many tools similar to Kaniko. These tools build container images using a variety of approaches.
There is a summary of these tools and others in the comparison with other tools section of the Kaniko documentation.
Here are links to a few of them: