Tekton Pipelines - the Nix way

Posted on September 30, 2019

In this post, we discuss a CI/CD pipeline that builds and deploys a Docker image in Kubernetes with Nix and Tekton Pipelines. I'm mainly interested in providing a simple CI/CD to deploy Nix expressions (because of Nix composability and reproducibility) in an existing Kubernetes cluster.

Tekton Pipelines is a CI/CD project providing k8s-style resources for declaring CI/CD pipelines: the CI/CD runs entirely in the cluster, and its configuration data are stored as custom Kubernetes resources (no need for external databases or volumes).

The Tekton tutorial consists of defining a pipeline with two tasks to build and deploy a Docker image. In our example, we define the same kind of pipeline with two main differences: the image is built from a Nix expression instead of a Dockerfile, and the deployment file is committed in the Git repository instead of being patched by the CI/CD.

We consider the deployment of a trivial Go web application that responds with hello-world. The CI/CD pipeline we will define consists of two tasks:

  1. build and push a Docker image (source)
  2. deploy this image (source)

The poc-tekton Git repository contains all the material used in the following.

Build the container image

The default Tekton tutorial uses Kaniko, a tool to build container images from a Dockerfile and push them from within a Kubernetes cluster. To build a Docker image from a Nix expression, we instead use the n2k8s image, which relies on the Nix Docker tooling to build an image and uses Skopeo to push this image to a registry.

n2k8s differs from Kaniko in how the tag of the built image is defined: the tag is based on the hash of the built image's output path. For non-Nix users: this hash is computed from all the build inputs (glibc, Go, …) of our Go application and can be computed without having to build the image (more details here).

For instance, if the Nix output path of an image is

$ nix-build
/nix/store/bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0-...

then the tag of the image is bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0.
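Since the tag is just the hash component of the Nix store path, it can be extracted with plain shell. A minimal sketch, where the name suffix of the store path is made up for illustration:

```shell
#!/usr/bin/env bash
# In practice this path would come from `nix-build`; the "-docker-image-hello"
# suffix here is a hypothetical example.
out="/nix/store/bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0-docker-image-hello.tar.gz"

# A store path looks like /nix/store/<hash>-<name>: the tag is the <hash> part.
tag=$(basename "$out" | cut -d- -f1)
echo "$tag"   # bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0
```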

This tag has several interesting properties:

  - it can be computed without having to build the image;
  - it is reproducible: it only changes when the build inputs of the application change.

So, we define a task to build and push our application by using the n2k8s image. It looks similar to the Tekton task defined in their tutorial (which uses Kaniko). Here is a fragment of this task (the whole task):

kind: Task
  steps:
    - name: build-and-push
      image: docker.io/lewo/n2k8s:latest
      command:
        - /entrypoint
      args:
        - --context
        - $(inputs.params.pathToContext)
        - --destination
        - $(outputs.resources.builtImage.url)
        - --image-manifest-filepath
        - /builder/home/image-outputs/builtImage/index.json
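The builtImage output referenced by this task is a Tekton image resource. A minimal sketch of how it could be declared as a v1alpha1 PipelineResource, with a made-up name and registry URL:

```yaml
# Hypothetical PipelineResource bound to the task's builtImage output;
# the resource name and image URL are assumptions, not from the post.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-image
spec:
  type: image
  params:
    - name: url
      value: docker.io/lewo/hello
```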

The entrypoint script takes three parameters:

  - --context: the directory containing the Nix expression of the image to build;
  - --destination: the URL of the image to push;
  - --image-manifest-filepath: the path where the manifest of the built image is written, so that Tekton can expose the image digest.

Deploy this image

In the deployment task (deploy-using-kubectl) of the Tekton tutorial, there is a first step (replace-image) that updates the deployment file with the reference of the image built by the previous task. The deployment file is updated in place and applied with kubectl. This is really convenient because we don't have to manually update the deployment file. However, the deployed file is not tracked in the Git repository.

In order to have a single source of truth, we would prefer to commit the deployment file with the correct image reference. Fortunately, because the image tag based on the Nix output path hash is reproducible, we can set the expected image tag in the deployment file.

Before pushing changes to the CI, we need to update the deployment file with the new hash. Unfortunately, it’s easy to forget to do that :/
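To make forgetting less likely, a small pre-commit helper could rewrite the tag automatically. A minimal sketch, assuming a hypothetical docker.io/lewo/hello image and manifest layout; in practice the tag would come from the `nix-build` output path:

```shell
#!/usr/bin/env bash
# Normally: tag=$(basename "$(nix-build --no-out-link)" | cut -d- -f1)
tag="bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0"

# Illustrative manifest with a stale tag (the image name is made up).
printf 'image: docker.io/lewo/hello:old\n' > /tmp/deployment.yaml

# Replace whatever tag is committed with the freshly computed one.
sed -i "s|\(image: docker.io/lewo/hello:\).*|\1${tag}|" /tmp/deployment.yaml
cat /tmp/deployment.yaml   # image: docker.io/lewo/hello:bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0
```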

To avoid this issue, we add a step that checks whether the image reference in the committed deployment file corresponds to the image built by the previous task. This is implemented by the step check-if-image-tag-is-up-to-date in the following fragment of the task (the whole task):

kind: Task
  inputs:
    resources:
      - name: source
        type: git
      - name: image
        type: image
    params:
      - name: path
        type: string
        description: Path to the manifest to apply
  steps:
    - name: check-if-image-tag-is-up-to-date
      image: nixery.dev/shell/yq/jq/skopeo/findutils
      command: ["bash"]
      args:
        - "-c"
        # From the deployment file, extract the image reference, query the
        # registry with this reference, and verify the digest is equal to
        # the digest from the previous task
        - "cat $(inputs.params.path) | yq -rs '.[0].spec.template.spec.containers[0].image' | xargs -I '{}' skopeo inspect docker://'{}' | jq -e '.Digest == \"$(inputs.resources.image.digest)\"'"
    - name: run-kubectl
      image: nixery.dev/shell/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "$(inputs.params.path)"
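The logic of the check step can be sketched locally in plain shell, without yq, jq, or Skopeo; the manifest content, image name, and digests below are illustrative stand-ins for what the real step reads from the file, the registry, and Tekton:

```shell
#!/usr/bin/env bash
# Illustrative deployment manifest (the image reference is made up).
printf 'spec:\n  template:\n    spec:\n      containers:\n        - image: docker.io/lewo/hello:abc\n' > /tmp/dep.yaml

# Extract the image reference from the manifest (the real step uses yq).
ref=$(sed -n 's/.*image: //p' /tmp/dep.yaml)

# In the real step: registry_digest=$(skopeo inspect "docker://$ref" | jq -r .Digest)
registry_digest="sha256:123"
# Provided by Tekton as $(inputs.resources.image.digest) in the real step.
built_digest="sha256:123"

[ "$registry_digest" = "$built_digest" ] && echo "up to date: $ref"
```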

Thanks to this step, the CI/CD ensures the committed image reference is the one that has been built by the previous task. And since the deployment file is not modified, the CI/CD performs no mutations (mutations are hard to reproduce locally for debugging or testing purposes).

Oh, and a nice detail! To run the check-if-image-tag-is-up-to-date step, we need a container image providing yq, jq and skopeo. In our CI/CD context, we would normally have to build and publish such an image (or be lucky enough to find one on Docker Hub).

We instead pull our image from Nixery, a registry that builds and serves images on demand based on the image name: every package the user wants to include in the image is specified as a path component of the image name. For instance, the image


nixery.dev/shell/yq/jq/skopeo

is an image containing yq, jq, and skopeo.