Testing Kubernetes deployments with KIND

You might be developing Helm charts to deploy your production application across a multi-cluster environment, or you might be just getting started with Kubernetes. Either way, chances are you will want to test your deployments or practice your skills somewhere. If you are lucky, your IT department has already set up testing and staging environments for you. If you are not one of the lucky ones, read on: we will drop a few words about testing Kubernetes deployments.

In this blog post, we will be exploring KIND.

KIND (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”.

It was primarily designed for testing Kubernetes itself, but it can also be used for local development or CI.

First steps in using Kubernetes in Docker

You can start using KIND on your desktop or laptop computer, provided you have Docker installed. Just install or download the KIND binary, and you are ready to go!

macOS

$ brew install kind

Windows

$ choco install kind

Linux

$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
$ chmod +x ./kind
$ mv ./kind /some-dir-in-your-PATH/kind


Now that you have KIND installed, you can set up your first cluster just by executing:

$ kind create cluster

And the output will show something like this:

 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"

You can now use your cluster with:
kubectl cluster-info --context kind-kind

Have a nice day! 👋

The command above will create one local k8s cluster named “kind”. Starting a cluster for the first time might take a couple of minutes while all the necessary Docker images are downloaded and the cluster is bootstrapped.
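
Since each KIND “node” is just a Docker container, you can see it with a plain docker ps (the exact output will vary):

$ docker ps --filter name=kind-control-plane --format '{{.Names}}  {{.Image}}'
kind-control-plane  kindest/node:v1.23.4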


After the cluster is provisioned, kind will also update your local kubeconfig by adding and switching to a new context. You will have full admin rights for the newly created cluster. From this point on, you can interact with the local cluster just like any other.
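
You can confirm that kind switched the context; context names follow the kind-&lt;cluster-name&gt; pattern:

$ kubectl config current-context
kind-kind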

$ kubectl get nodes

NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   26m   v1.23.4

It’s worth noting that you are not limited to just one cluster. You can have multiple local clusters running simultaneously, as long as you have the resources for them.
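
Each additional cluster just needs a unique name, for example:

$ kind create cluster --name second-cluster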


To get a list of all available clusters, run:

$ kind get clusters


To remove the KIND cluster, simply run: 

$ kind delete cluster --name kind


Additional options for testing Kubernetes deployments

By default, kind will create a cluster consisting of only one control-plane node. You can still deploy your workloads on this single node.

Oftentimes you might be writing Helm charts or deployments that have specific pod affinities configured. These may, for example, require multiple nodes (spreading replicas out for high availability).

Consider this deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine


In the example above, the required anti-affinity rule forbids two pods with the app=store label from sharing a node, so you will need at least three schedulable nodes for all three replicas to be scheduled successfully.

With KIND, we can define a cluster with three worker nodes by creating a cluster-3nodes.yaml file containing:

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind-3

nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker


We can then create a cluster by specifying this config file as an optional argument.

$ kind create cluster --config cluster-3nodes.yaml
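
Once the cluster is up, kubectl get nodes should list one control-plane node and three workers (the ages and versions below are illustrative):

$ kubectl get nodes
NAME                   STATUS   ROLES                  AGE   VERSION
kind-3-control-plane   Ready    control-plane,master   2m    v1.23.4
kind-3-worker          Ready    &lt;none&gt;                 2m    v1.23.4
kind-3-worker2         Ready    &lt;none&gt;                 2m    v1.23.4
kind-3-worker3         Ready    &lt;none&gt;                 2m    v1.23.4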


There are many more configuration options to choose from. For example, you can map node ports to the host, which enables you to run Ingress controllers, or define extra host mounts on nodes to persist data from your deployments.
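
As a minimal sketch, a config along these lines combines KIND’s extraPortMappings and extraMounts options; the port numbers and paths are arbitrary examples:

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # forward host port 8080 to port 80 inside the node container,
  # e.g. for an Ingress controller
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
- role: worker
  # mount a host directory into the node to persist data
  extraMounts:
  - hostPath: /tmp/kind-data
    containerPath: /data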

A word of caution! Even though KIND is configurable, it is by no means a replacement for a production Kubernetes cluster. Once you create a cluster, you cannot easily modify its configuration. This limitation is by design: a KIND cluster is meant to be thrown away after use.

Practical use case for testing Kubernetes deployments

We are ready to start testing Kubernetes deployments. As already mentioned, local disposable clusters are excellent playgrounds. You can use them to learn how to interact with Kubernetes without fear of breaking something in production. They are disposable and reproducible, so you can tackle the same or different problems multiple times.

Another use case that comes to mind is using KIND in CI/CD pipelines to perform integration tests of your deployments, for example, testing whether your Helm chart successfully deploys all components and does what it is supposed to.

Here is a sample GitLab CI file:

---
stages:
  - lint
  - cluster-init
  - deploy
  - verify
  - cleanup

variables:
  CHART_NAME: sample-chart
  RELEASE_NAME: testing

.runner:
  tags:
    - docker
    - kind

helmlint:
  extends: .runner
  stage: lint
  script:
    - helm lint ${CHART_NAME}

clusterinit:
  extends: .runner
  stage: cluster-init
  script:
    - kind create cluster --name ${CI_COMMIT_SHORT_SHA}

deploychart:
  extends: .runner
  stage: deploy
  script:
    - kind export kubeconfig --name ${CI_COMMIT_SHORT_SHA}
    - helm upgrade --install ${RELEASE_NAME} ${CHART_NAME} --wait --timeout 300s

verify:
  extends: .runner
  stage: verify
  script:
    - kind export kubeconfig --name ${CI_COMMIT_SHORT_SHA}
    - kubectl wait --for=condition=available --timeout=300s deployment/${RELEASE_NAME}-${CHART_NAME}
    - kubectl get deployment ${RELEASE_NAME}-${CHART_NAME} -n default -o=jsonpath='{.status.replicas}' | grep 1

cleanup:
  extends: .runner
  stage: cleanup
  script:
    - kind delete cluster --name ${CI_COMMIT_SHORT_SHA}
  when: always


This simplified pipeline runs on a shell executor with Docker installed and the GitLab Runner user allowed to access Docker. You will most likely want to extend or modify this pipeline for your environment. At the very least, you will need to change the verify stage to reflect your integration testing needs.


Let’s go through the file:

stages:

In line 2, we define our pipeline stages, and in line 9, we define some variables so that the rest of the pipeline stays reusable.

lint stage:

The first job, helmlint, runs in the lint stage and reports any issues with the chart syntax early on. If you have additional linters or tests you wish to run against your chart, you can add them all to the lint stage so they run in parallel before we get to integration testing; one possible extra check is sketched below.
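
For instance, rendering the chart with helm template is a cheap way to catch templating errors that helm lint can miss. A hypothetical extra job could look like this:

helmtemplate:
  extends: .runner
  stage: lint
  script:
    - helm template ${RELEASE_NAME} ${CHART_NAME} > /dev/null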

cluster-init stage:

At line 24, the clusterinit job will provision our KIND cluster named after this specific commit’s short SHA. This way, we ensure we have one throw-away cluster per pipeline, preventing any issues that might arise if multiple similar pipelines were to run in parallel.

deploy stage:

After the cluster is initialized, at line 30 the deploychart job exports the kubeconfig for this specific cluster and then installs our sample-chart as a testing release in our KIND cluster. We use the --wait flag so that Helm blocks until the release is fully rolled out, and the --timeout flag short-circuits the job if some misconfiguration causes endless loops or conditions that cannot be met.

verify stage:

In our verify job at line 37, we export the kubeconfig and run two simple tests. The first waits until the deployment reaches the available state; granted, this is somewhat redundant with Helm’s --wait parameter, which does the same thing. The second ensures we have exactly one replica provisioned. If you need deeper checks, a sketch of an additional smoke test follows below.
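
For example, if your chart exposes a Service, a hypothetical extra script line could curl it from inside the cluster (the Service name assumes Helm’s default &lt;release&gt;-&lt;chart&gt; naming):

    - kubectl run smoke-test --rm -i --restart=Never --image=curlimages/curl -- curl -fsS http://${RELEASE_NAME}-${CHART_NAME}.default.svc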

cleanup stage:

In the end, we run a cleanup job to remove the entire KIND cluster. The when: always rule ensures this job runs even when the deploy or verify stages fail, so we never leave lingering clusters on our runners.

Summary

KIND is an excellent tool for testing Kubernetes deployments. While this post only covers the basics, I strongly recommend checking out KIND’s web page for more information on configuration options and use cases. To anyone starting with Kubernetes, this tool is a great resource for learning, testing, and developing your skills. For advanced users, KIND is a great tool to have in your arsenal for integration testing, debugging, and evaluating your applications.
