Using Kubedeploy for easy k8s deployments

Today we are happy to announce the general availability of our Kubedeploy Helm chart.

Initially, we wrote a separate Helm chart for every application. We soon noticed a repeating pattern when deploying single-container applications: most of them had the same requirements, and maintaining a dedicated chart for each one didn’t make sense. At that point, we started working on the Kubedeploy Helm chart, which eases deploying new applications while eliminating the need to maintain a chart per application. Any application-specific configuration can be adjusted by changing the chart parameters via values.yaml.

Over time, as requirements grew, we evolved the chart to cover more and more use cases. We soon onboarded a few of our clients onto the same chart, and with their valuable feedback we extended its functionality to cover many use cases we didn’t initially require.

Kubedeploy’s primary mission has remained the same: simplify deploying single-container applications into Kubernetes clusters.
The chart is by no means meant to replace complex deployment scenarios, but it can be a great starting point for developing your own chart.

Without further ado, let’s explore some of the use cases and capabilities of Kubedeploy.

Requirements

To follow along with this post, you will need Helm and helmfile installed on your workstation and basic familiarity with both tools. You will also need a working K8s cluster with credentials to deploy applications within a namespace. To test things out, you can use a Kind cluster (as described in this post) or deploy the chart directly to your production or staging cluster.
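If you go the Kind route, a throwaway cluster is one command away (the cluster name here is arbitrary):

kind create cluster --name kubedeploy-demo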

Adding the repository

To use the chart, you must add Sysbee’s Helm chart repository to your workstation.

helm repo add sysbee https://charts.sysbee.io/stable/sysbee
helm repo update

If you install this chart into the cluster with default values, you will find that it deploys an Nginx container.
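For example, a quick smoke test (the release name is arbitrary):

helm install kubedeploy-test sysbee/kubedeploy
kubectl get pods   # an nginx-based pod should start up
helm uninstall kubedeploy-test   # clean up the test release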

Deploying our application

Deploying our application should be as simple as setting our own image and tag in a custom-values.yaml file:

image:
  repository: my-docker-repository
  tag: latest

Then deploy it with Helm:

helm upgrade --install my-release sysbee/kubedeploy -f custom-values.yaml

All available configuration options are listed on the chart’s home page, or you can extract the default values from the chart itself and modify them to suit your application:

helm show values sysbee/kubedeploy > values.yaml

Features to consider

By default, the chart deploys your container as a Kubernetes Deployment, with a Service listening on port 80 and targeting the container port named http (which is not configured by default).

To name a few other options: you can deploy the container as a Statefulset or a Job (triggered only once), define a persistent volume, environment variables, an ingress host, etc. The list will probably expand over time, so the chart’s home page remains the best place to review all the features and configuration options.
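As an illustration, a one-off task might look something like the following values snippet. Note that Job as a deploymentMode value is assumed here by analogy with Statefulset, so verify it against helm show values sysbee/kubedeploy:

deploymentMode: Job
image:
  repository: my-docker-repository
  tag: latest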

Even though this chart is designed to deploy simple single-container applications, you can still use it multiple times to deploy multiple containers and/or define relationships via sub-charts or helmfile.

Example project

To better illustrate, let’s deploy an actual application using kubedeploy.

I like cooking and trying out new dishes. A lifelong problem of mine was tracking all the recipes and storing them for later use. Luckily there is an excellent open-source project called Mealie that we will be deploying today.

Mealie will store all your recipes, generate your shopping list based on ingredients, and even generate a weekly meal plan.

Simple deployment

For defining custom configuration values while deploying Helm charts, I like to use helmfile. This way, I don’t need to track and update an entire values file when moving between chart versions.

I’ll create a helmfile.yaml with the following content:

---
# Here we define all the Helm repositories used within this helmfile
repositories:
  - name: sysbee
    url: https://charts.sysbee.io/stable/sysbee

# Define all our Helm releases
releases:
  - name: mealie  # this will be the name of release as seen by Helm
    namespace: default
    chart: sysbee/kubedeploy
    version: 0.7.1
    installed: true
    # Define our custom values here
    values:
      - image:
          repository: hkotel/mealie
          tag: "v0.5.6"

Now all that’s left to do is deploy it to the cluster:

helmfile --file helmfile.yaml apply
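To confirm the release landed, you can list it and watch the pod come up:

helm list -n default
kubectl get pods -n default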

Configuring the deployment

Mealie is now deployed in the cluster; it’s time to configure its runtime via environment variables and expose its port so we can access it.
The first thing we need to do is add an exposed port for our container in helmfile.yaml:

      # define container ports
      - ports:
        - name: http
          containerPort: 80
          protocol: TCP

Since we named our port http, the chart automatically defines a Service object that targets this port and listens on a ClusterIP on port 80.
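After the next apply, you can inspect the generated Service; its name, as used later in this post, follows the <release>-kubedeploy pattern:

kubectl get service mealie-kubedeploy
kubectl get endpoints mealie-kubedeploy   # should list the pod IP behind port 80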

To configure environment variables for the container, we use:

      # define container env vars
      - env:
        - name: DB_TYPE
          value: "sqlite"
        - name: TZ
          value: "Europe/Zagreb"
        - name: DEFAULT_EMAIL
          value: my_email@address.com
        - name: TOKEN_TIME
          value: "720"
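Once these changes are applied, you can spot-check the variables inside the running container (assuming the Deployment shares the <release>-kubedeploy name used by the Service):

kubectl exec deploy/mealie-kubedeploy -- env | grep -E 'DB_TYPE|TZ|TOKEN_TIME'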

Before applying, let’s preview what this new configuration would change:

helmfile --file helmfile.yaml diff

The helmfile diff subcommand uses the helm-diff plugin to display what would change:

(Screenshot: helmfile diff output on the first Mealie deployment via Kubedeploy)

A keen observer might notice that once we define the http port for our container, the chart automatically adds liveness and readiness probes for it. If you don’t want health check probes, you can disable them by setting healthcheck.enabled = false within the values:

      - healthcheck:
          enabled: false

Or you can define your own probes and target URLs, for example:

      - healthcheck:
          enabled: true
          probes:
            livenessProbe:
              httpGet:
                port: http
                path: /healthy
            readinessProbe:
              httpGet:
                port: http
                path: /ready
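The probe definitions use the standard Kubernetes probe fields, so, assuming the chart passes them through verbatim, you can also tune their timing for slow-starting applications:

      - healthcheck:
          enabled: true
          probes:
            livenessProbe:
              httpGet:
                port: http
                path: /healthy
              initialDelaySeconds: 30   # give the app time to boot
              periodSeconds: 10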

For now, we will stick to the automatically created health check probes and deploy our configuration:

helmfile --file helmfile.yaml apply

Our port should now be exposed, and our Service object should accept traffic. We still don’t have the service exposed to the public, but we can verify it works with a simple port forward:

kubectl port-forward service/mealie-kubedeploy 8091:80

And then visit http://localhost:8091/ in a browser.
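Or check it from the terminal:

curl -I http://localhost:8091/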

Exposing it to the public

Exposing this application to the public is only a matter of defining ingress resources within the chart’s custom values.

Your cluster must have an ingress controller installed and configured. That is a topic for another blog post, so let’s assume you have one and stick to the chart’s values:

      # map domain to this container
      - ingress:
          enabled: true
          # define your custom ingress annotations
          annotations:
            kubernetes.io/ingress.class: haproxy
            cert-manager.io/cluster-issuer: letsencrypt

          hosts:
            - host: mealie.example.com
              paths:
                - path: /
          tls:
            - secretName: mealie-ingress-cert
              hosts:
                - mealie.example.com

This snippet configures the HAProxy ingress controller to expose the application on the mealie.example.com domain, with a TLS certificate issued via cert-manager’s Let’s Encrypt issuer.

Again, we need to deploy it to our cluster:

helmfile --file helmfile.yaml apply
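Once applied, you can verify that the ingress and its certificate were created (the certificate resource requires cert-manager’s CRDs):

kubectl get ingress
kubectl get certificate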

Data persistency

Currently, the application is deployed as a Kubernetes Deployment without any persistent storage. Mealie will, by default, save images and its SQLite database in the /app/data directory.

When you deploy a new version or modify the kubedeploy configuration values, Kubernetes will re-deploy the entire container: its pod will get a new name, and all data stored while the pod was running will be lost.
This is normal, and for most applications it poses no issue. In some cases, however, we want to persist the data written to disk.

We can do this by changing our deployment mode to Statefulset and enabling persistence.

      - deploymentMode: Statefulset

      # define file storage volume
      - persistency:
          enabled: true
          capacity:
            storage: 5Gi
          mountPath: "/app/data"

And again, let’s apply our configuration:

helmfile --file helmfile.yaml apply

You will now have a persistent volume mounted at /app/data, so that re-deployed pods retain access to previously stored data.
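You can confirm the claim exists; the PVC name is generated by the chart, so simply list all claims in the namespace:

kubectl get pvc -n default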

Extending it further

As I mentioned, kubedeploy is intended for single-container applications. By using helmfile, we can combine multiple applications and define dependencies between them. For example, Mealie can also use PostgreSQL as its database type instead of SQLite.

Let’s see what we need to do to have this configured.

First, let’s add a PostgreSQL Helm chart repository to our helmfile.yaml.

We could, in theory, deploy PostgreSQL with kubedeploy as well, but since more specialized charts exist, let’s not reinvent the wheel.

For this purpose, we will use Bitnami’s PostgreSQL chart:

---
# Here we define all the Helm repositories used within this helmfile
repositories:
  - name: sysbee
    url: https://charts.sysbee.io/stable/sysbee
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

Then, let’s create a new release for our PostgreSQL database:

  - name: mealie-pg
    namespace: default
    chart: bitnami/postgresql
    installed: true
    values:
      - global:
          postgresql:
            auth:
              # used for upgrading the chart version
              postgresPassword: master-user-pass
      - auth:
          username: mealie
          password: change-me
          database: mealie

A word of caution! For the sake of simplicity, the passwords in this example are kept in plain text in the deployment file. For production, please make sure you use a secret store.
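As one option, helmfile can decrypt sops-encrypted values files through the helm-secrets plugin. A minimal sketch, assuming the plugin is installed (the file path is illustrative):

  - name: mealie-pg
    namespace: default
    chart: bitnami/postgresql
    installed: true
    # decrypted at deploy time by the helm-secrets plugin
    secrets:
      - secrets/mealie-pg-auth.yaml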

Now, finally, we can change the DB_TYPE environment variable and add the additional PostgreSQL connection variables to our primary release:

      # define container env vars
      - env:
        - name: DB_TYPE
          value: "postgres"
        - name: POSTGRES_USER
          value: mealie
        - name: POSTGRES_PASSWORD
          value: change-me
        - name: POSTGRES_DB
          value: mealie
        - name: POSTGRES_SERVER
          value: mealie-pg-postgresql

Let’s also define a relationship between releases:

    # define reference to other release
    # namespace/release-name
    needs:
      - default/mealie-pg

Putting it all together

As we made significant modifications to our helmfile.yaml over the course of this post, here is the complete file for reference:

---
# Here we define all the Helm repositories used within this helmfile
repositories:
  - name: sysbee
    url: https://charts.sysbee.io/stable/sysbee
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

# Define all our Helm releases
releases:
  - name: mealie  # this will be the name of release as seen by Helm
    namespace: default
    chart: sysbee/kubedeploy
    version: 0.7.1
    installed: true
    # Define our custom values here
    values:
      - image:
          repository: hkotel/mealie
          tag: "v0.5.6"

      # define container ports
      - ports:
        - name: http
          containerPort: 80
          protocol: TCP

      # define container env vars
      - env:
        - name: DB_TYPE
          value: "postgres"
        - name: POSTGRES_USER
          value: mealie
        - name: POSTGRES_PASSWORD
          value: change-me 
        - name: POSTGRES_DB
          value: mealie
        - name: POSTGRES_SERVER
          value: mealie-pg-postgresql
        - name: TZ
          value: "Europe/Zagreb"
        - name: DEFAULT_EMAIL
          value: my_email@address.com
        - name: TOKEN_TIME 
          value: "720"

      # map domain to this container
      - ingress:
          enabled: true
          # define your custom ingress annotations
          annotations:
            kubernetes.io/ingress.class: haproxy
            cert-manager.io/cluster-issuer: letsencrypt

          hosts:
            - host: mealie.example.com
              paths:
                - path: /
          tls:
            - secretName: mealie-ingress-cert
              hosts:
                - mealie.example.com

      - deploymentMode: Statefulset

      # define file storage volume
      - persistency:
          enabled: true
          capacity:
            storage: 5Gi
          mountPath: "/app/data"

    # define reference to other release
    # namespace/release-name
    needs:
      - default/mealie-pg

  - name: mealie-pg
    namespace: default
    chart: bitnami/postgresql
    installed: true
    values:
      - global:
          postgresql:
            auth:
              # used for upgrading the chart version
              postgresPassword: master-user-pass
      - auth:
          username: mealie 
          password: change-me
          database: mealie 

All that is left to do is to apply all the changes, and we are good to go:

helmfile --file helmfile.yaml apply

Once you are done with all the testing, you can easily remove all applications deployed with helmfile by issuing the following:

helmfile --file helmfile.yaml delete
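Note that Helm typically leaves behind the PersistentVolumeClaim created for the Statefulset. If you also want the stored data gone, remove it explicitly; the label selector below assumes the chart applies the standard Helm instance labels:

kubectl delete pvc -l app.kubernetes.io/instance=mealie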

Summary

We hope this post and kubedeploy will help you deploy your applications to your cluster. Our intention is to keep developing kubedeploy with new features and simplified configuration while retaining backwards compatibility and low-level configurability.

If you would like to propose new features or use cases, drop us a line in the comments. And if you need assistance defining your deployment process or managing your Kubernetes cluster, feel free to contact us.
