Tekton Pipelines - the Nix way

In this post, we discuss a CI/CD pipeline that builds and deploys a Docker image in Kubernetes with Nix and Tekton Pipelines. I’m mainly interested in providing a simple CI/CD to deploy Nix expressions (because of Nix composability and reproducibility) to an existing Kubernetes cluster.

Tekton Pipelines is a CI/CD project providing k8s-style resources for declaring CI/CD pipelines: the CI/CD runs entirely in the cluster and its configuration consists of custom Kubernetes resources (no need for external databases or volumes).

The Tekton tutorial consists of defining a pipeline with two tasks to build and deploy a Docker image. In our example, we define the same kind of pipeline with the following differences:

  • we use Nix instead of the Docker tooling
  • there are no CI/CD tasks modifying deployment files
  • we pull the container images used by the pipeline steps from Nixery

We consider the deployment of a trivial Go web application that responds with hello-world. The CI/CD pipeline we will define consists of two tasks:

  1. build and push a Docker image (source)
  2. deploy this image (source)

The poc-tekton Git repository contains all the material used in the following.
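
To give a rough idea of how these two tasks are wired together, here is a sketch of what the Pipeline resource could look like (task and resource names are illustrative; the actual definitions live in the poc-tekton repository):

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  resources:
    # The Git repository containing the Nix expression and the deployment file
    - name: source
      type: git
    # The image built and pushed by the first task
    - name: builtImage
      type: image
  tasks:
    - name: build-and-push
      taskRef:
        name: build-and-push
      resources:
        inputs:
          - name: source
            resource: source
        outputs:
          - name: builtImage
            resource: builtImage
    - name: deploy
      taskRef:
        name: deploy-using-kubectl
      resources:
        inputs:
          - name: source
            resource: source
          # "from" makes this task consume the image produced by build-and-push
          - name: image
            resource: builtImage
            from:
              - build-and-push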

Build the container image

The default Tekton tutorial uses Kaniko, a tool to build an image from a Dockerfile and push it from within a Kubernetes cluster. To build a Docker image from a Nix expression, we instead use the n2k8s image, which relies on the Nix Docker tooling to build an image and on Skopeo to push this image to a registry.

n2k8s differs from Kaniko in how the tag of the built image is defined: the tag is based on the hash of the built image’s output path. For non-Nix users: this hash is computed from all the build inputs (glibc, Go, …) of our Go application and can be computed without having to build the image (more details here).

For instance, if the Nix output path of an image is

$ nix-build
/nix/store/bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0-docker-image-hello-world.tar.gz

then the tag of the image is bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0.

This tag has several interesting properties:

  • it allows aggressive caching: if this tag already exists in the registry, n2k8s doesn’t build and push the image again
  • it doesn’t rely on bit-per-bit reproducible binaries (unlike an image digest)
  • it can be computed without having to build the image

So, we define a task that builds and pushes our application using the n2k8s image. It looks similar to the task defined in the Tekton tutorial (which uses Kaniko). Here is a fragment of this task (the whole task):

kind: Task
spec:
  ...
  steps:
    - name: build-and-push
      image: docker.io/lewo/n2k8s:latest
      command:
        - /entrypoint
      args:
        - --context
        - $(inputs.params.pathToContext)
        - --destination
        - $(outputs.resources.builtImage.url)
        - --image-manifest-filepath
        - /builder/home/image-outputs/builtImage/index.json
      ...

The entrypoint script takes three parameters:

  • context is the path of the default.nix file
  • destination contains the name of the image (only the registry and the image name)
  • image-manifest-filepath is an optional parameter to specify where the image manifest has to be written. This manifest is used by Tekton to export the image digest to other tasks.
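
As a reminder, the builtImage output is a Tekton image resource whose url only contains the registry and the image name (the tag is computed by n2k8s). A sketch of such a resource, with an illustrative registry and image name:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: hello-world-image
spec:
  type: image
  params:
    - name: url
      # Registry and image name only: n2k8s computes the tag itself
      value: docker.io/my-user/hello-world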

Deploy this image

In the deployment task (deploy-using-kubectl) of the Tekton tutorial, a first step (replace-image) updates the deployment file with the reference of the image built by the previous task. The deployment file is updated in place and deployed with kubectl. This is really convenient because we don’t have to manually update the deployment file. However, the file that is actually deployed is not tracked in the Git repository.

In order to have a single source of truth, we would prefer to commit the deployment file with the correct image reference. Fortunately, because the image tag based on the Nix output path hash is reproducible, we can set the expected image tag in the deployment file.
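
Concretely, the committed deployment file references the image with the Nix hash as its tag. A fragment could look like the following (registry and image name are illustrative; the tag is the hash from the example above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
        - name: hello-world
          # The tag is the hash of the Nix output path of the image
          image: docker.io/my-user/hello-world:bhvhf4ndzxgqnlwzv5s272i5xs2qs6v0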

Before pushing changes to the CI, we need to update the deployment file with the new hash. Unfortunately, it’s easy to forget to do that :/

To avoid this issue, we add a step that checks whether the image reference in the committed deployment file corresponds to the image built by the previous task. This is implemented by the step check-if-image-tag-is-up-to-date in the following task fragment (the whole task):

kind: Task
spec:
  inputs:
    resources:
      - name: source
        type: git
      - name: image
        type: image
    params:
      - name: path
        type: string
        description: Path to the manifest to apply
  steps:
    - name: check-if-image-tag-is-up-to-date
      image: nixery.dev/shell/yq/jq/skopeo/findutils
      command: ["bash"]
      args:
        - "-c"
        # From the deployment file, extract the image reference, query the
        # registry with this reference, and verify the digest is equal to
        # the digest from the previous task
        - "cat $(inputs.params.path) | yq -rs '.[0].spec.template.spec.containers[0].image' | xargs -I '{}' skopeo inspect docker://'{}' | jq -e '.Digest == \"$(inputs.resources.image.digest)\"'"
    - name: run-kubectl
      image: nixery.dev/shell/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "$(inputs.params.path)"

Thanks to this step, the CI/CD ensures the committed image reference is the one that has been built by the previous task. Since the deployment file is not updated, the CI/CD performs no mutations (mutations are hard to reproduce locally for debugging or testing purposes).
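
For completeness, such a pipeline would be triggered by a PipelineRun binding the actual resources. A minimal sketch, reusing the illustrative names from the sketches above (hello-world-git is a hypothetical git PipelineResource pointing to our repository):

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: build-and-deploy-run
spec:
  pipelineRef:
    name: build-and-deploy
  resources:
    # Bind the declared pipeline resources to concrete PipelineResources
    - name: source
      resourceRef:
        name: hello-world-git
    - name: builtImage
      resourceRef:
        name: hello-world-image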


Oh, and a nice detail! To run the check-if-image-tag-is-up-to-date step, we need a container image providing yq, jq and skopeo. In our CI/CD context, we would need to build and publish such an image (or be lucky enough to find one on the Docker Hub).

We instead pull our image from Nixery, a registry that builds and serves images on demand based on the image name. Every package the user wants to include in the image is specified as a path component of the image name. For instance, the image

nixery.dev/shell/yq/jq/skopeo

is an image containing yq, jq and skopeo (the shell component is a Nixery meta-package providing a shell environment).

Conclusion

  • the n2k8s image can build and push a Docker image defined by a Nix expression
  • the hash of the image output path can be used as an image tag to reduce the mutations performed by the CI/CD
  • Nixery is a nice project to quickly define a container image used by a step

Notes

  • In a Nix world, the deployment file could be generated with Kubenix. In this case, it’s trivial to reference the image without any human action.
  • The image provided by Nixery is not reproducible since we don’t pin nixpkgs in the URL (it would be nice to be able to pin nixpkgs)
  • The check-if-image-tag-is-up-to-date step could be simplified a lot if Tekton exposed the tag in the image resource. This is not yet the case, which is why we have to query the registry.
  • n2k8s could push derivations to a binary cache to avoid rebuilding all derivations when the image changes.