Overlays over templating

Alex Woods

July 13, 2020


As Kubernetes grew in popularity, people started looking for effective ways to manage their application manifests, their declarative descriptions of the Kubernetes resources they needed to run their apps on a cluster.

That raises a difficult question: how can I use the same app manifests to target multiple environments? How can I produce multiple variants?

The most common solution is some kind of templating, usually Go templates via Helm. Helm brings its own advantages and drawbacks beyond templating, but we'll set those aside for today and focus on the problems with templating itself.

An Alternative API

If a template is used by enough people, eventually every value in it gets parameterized. At that point, you've provided an alternative API schema, one that contains an out-of-date subset of the full Kubernetes API [1].

Here's a bit of the Datadog Helm chart while you ponder that:

{{- template "check-version" . }}
{{- if .Values.agents.enabled }}
{{- if (or (.Values.datadog.apiKeyExistingSecret) (.Values.datadog.apiKey)) }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ template "datadog.fullname" . }}
  labels:
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    app.kubernetes.io/name: "{{ template "datadog.fullname" . }}"
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
spec:
  selector:
    matchLabels:
      app: {{ template "datadog.fullname" . }}
        {{- if .Values.agents.podLabels }}
{{ toYaml .Values.agents.podLabels | indent 6 }}
        {{- end }}
  template:
    metadata:
      labels:
        app: {{ template "datadog.fullname" . }}
        {{- if .Values.agents.podLabels }}
{{ toYaml .Values.agents.podLabels | indent 8 }}
        {{- end }}
      name: {{ template "datadog.fullname" . }}
      annotations:
        checksum/autoconf-config: {{ tpl (toYaml .Values.datadog.autoconf) . | sha256sum }}
        checksum/confd-config: {{ tpl (toYaml .Values.datadog.confd) . | sha256sum }}
        checksum/checksd-config: {{ tpl (toYaml .Values.datadog.checksd) . | sha256sum }}
        {{- if .Values.agents.customAgentConfig }}
        checksum/agent-config: {{ tpl (toYaml .Values.agents.customAgentConfig) . | sha256sum }}
        {{- end }}
        {{- if .Values.datadog.systemProbe.enabled }}
        container.apparmor.security.beta.kubernetes.io/system-probe: {{ .Values.datadog.systemProbe.apparmor }}
        container.seccomp.security.alpha.kubernetes.io/system-probe: {{ .Values.datadog.systemProbe.seccomp }}
        {{- end }}
      {{- if .Values.agents.podAnnotations }}
{{ toYaml .Values.agents.podAnnotations | indent 8 }}
      {{- end }}

To be fair, the above is not the API provided by the Helm chart (that would be values.yaml), but on every Helm chart I've worked on, I've had to dive into the templates. And a template plus its values is harder to reason about than the manifests themselves.
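
For a sense of what that alternative API looks like, here's roughly the values.yaml fragment that would drive the excerpt above. The key names are taken from the template itself; the values are my own placeholders, and the real file in the chart may differ:

agents:
  enabled: true
  podLabels: {}
  podAnnotations: {}
  customAgentConfig: {}
datadog:
  apiKey: "<your-api-key>"
  apiKeyExistingSecret: ""
  autoconf: {}
  confd: {}
  checksd: {}
  systemProbe:
    enabled: false
    apparmor: unconfined
    seccomp: localhost/system-probe

None of these keys are Kubernetes API fields. To learn what any of them actually does, you end up reading the templates.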

Tooling

Bad tooling is a corollary of providing an alternative API.

Most tooling will be built off of the original API, and so the community has to build plugins (e.g. helm-kubeval).
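
For example, validating plain manifests works with any tool that understands the Kubernetes API schema, but validating a chart means rendering it first or installing a Helm-aware plugin (commands are illustrative; exact flags vary by version):

# plain manifests: any schema validator will do
$ kubeval deployment.yaml

# a Helm chart: render it, then validate
$ helm template ./mychart | kubeval

# or install a purpose-built plugin
$ helm plugin install https://github.com/instrumenta/helm-kubeval
$ helm kubeval ./mychart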

I think the community has actually done a very good job of this, and I'd credit that to Helm's usefulness as a package manager. But why is one tool both a templating engine and a package manager? And do we need a package manager at all for deploying our bespoke (in-house) applications?

Overlays

Overlays are a way to accomplish our original goal — producing variants — without templating.

You have a base, an overlay, and the variant. The base and the variant are in the same language, and the overlay operation should be crystal clear. This is probably most easily understood through an example.

Imagine some card game, with rounds. You start with this hand (the base).

[Image: the starting hand of cards]

In one round, you experience the following change (the overlay):

[Image: the change applied to the hand]

Your resulting hand would be this:

[Image: the resulting hand]

Both the initial hand and the resulting hand (the base and the variant) are clear to anybody who has ever been dealt a hand of cards. At no point is there a poorly-maintained template.
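
In YAML terms, here's a toy sketch of the same idea (not real Kustomize syntax, just the shape of the operation):

# base: the starting hand
hand: [2♠, 7♥, K♦]

# overlay: the change in one round
discard: [7♥]
draw: [A♣]

# variant: the resulting hand
hand: [2♠, K♦, A♣]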

This idea, of having the input and the variant be in the same language, is the core of declarative application management (DAM). As the white paper on this topic states, the ideal tool should be able to instantiate multiple variants while exposing and teaching the Kubernetes APIs [1]. Templating does allow for variants, but it obfuscates the APIs.

Kustomize as an implementation of DAM

The tool built from the principles in the Declarative App Management white paper is Kustomize. In Kustomize you define a base, a set of plain Kubernetes resources. You then apply one or more overlays to customize the base in the ways you'd naturally want to in Kubernetes, like adding resources, labels, or annotations. The output is a variant, usually ready for a specific environment.

The key thing that makes this a great tool is that the input is crystal clear (it's just normal Kubernetes manifests). Overlays are similarly easy to understand. The whole process is well-scoped and easily understandable; the tool is not doing more than it should.

Example

Here's my own adapted version of Kustomize's hello world example, put together after playing around with the tool for a while. It's a really fun tool to explore.

Prerequisites

Install kustomize and kubectl. Optionally, have a Kubernetes cluster available if you want to apply the output.

$ brew install kustomize
$ brew install kubectl

$ kubectl config current-context
gke_atomic-commits_us-central1-c_my-first-cluster-1

Clone the repo

$ git clone https://github.com/alexhwoods/kustomize-example.git
...

$ kustomize build base

And you should see the full base kustomization. A kustomization is a kustomization.yaml file, or a directory containing one of those files. You can think of it as a general set of Kubernetes resources, with some extra metadata.
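
The build output isn't the kustomization.yaml itself; given the directory layout shown later in the post, base/kustomization.yaml is presumably just a list of the resource files:

resources:
  - configMap.yaml
  - deployment.yaml
  - service.yaml

The build output looks like this: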

apiVersion: v1
data:
  altGreeting: Good Morning!
  enableRisky: "false"
kind: ConfigMap
metadata:
  labels:
    app: hello
  name: monopole-map
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello
  name: monopole
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: hello
    deployment: hello
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: monopole
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
      deployment: hello
  template:
    metadata:
      labels:
        app: hello
        deployment: hello
    spec:
      containers:
        - command:
            - /hello
            - --port=8080
            - --enableRiskyFeature=$(ENABLE_RISKY)
          env:
            - name: ALT_GREETING
              valueFrom:
                configMapKeyRef:
                  key: altGreeting
                  name: monopole-map
            - name: ENABLE_RISKY
              valueFrom:
                configMapKeyRef:
                  key: enableRisky
                  name: monopole-map
          image: monopole/hello:1
          name: monopole
          ports:
            - containerPort: 8080

We have a ConfigMap, a Service of type LoadBalancer, and a Deployment. Let's deploy just the base, and see what we get.

# create a namespace to mess around in
$ kubectl create ns kustomize-example-only-the-base

# apply all resources in the kustomization to the cluster
$ kustomize build base | kubectl apply -n kustomize-example-only-the-base -f -
configmap/monopole-map created
service/monopole created
deployment.apps/monopole created

# is everything there?
$ kubectl get all -n kustomize-example-only-the-base
NAME                            READY   STATUS    RESTARTS   AGE
pod/monopole-647458c669-hhkhr   1/1     Running   0          11m
pod/monopole-647458c669-k6d4c   1/1     Running   0          11m
pod/monopole-647458c669-zmcxg   1/1     Running   0          11m

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/monopole   LoadBalancer   10.105.8.138   34.72.203.252   80:32142/TCP   11m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/monopole   3/3     3            3           11m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/monopole-647458c669   3         3         3       11m

Now if I go to the load balancer's external IP, I see a simple website.

[Image: the hello app's web page]
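
(If you don't want to fish the external IP out of the table above, a jsonpath query pulls it out directly; the namespace is the one we created earlier:)

$ kubectl get svc monopole -n kustomize-example-only-the-base \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'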

Overlays

Now let's add a very simple overlay. An overlay (in Kustomize) is a kustomization that depends on another kustomization.

.
├── base
│   ├── configMap.yaml
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    └── dev
        └── kustomization.yaml

Here is overlays/dev/kustomization.yaml:

resources:
  - ../../base
commonLabels:
  variant: dev

Let's look at the diff between the base and the overlay:

$ diff <(kustomize build base) <(kustomize build overlays/dev)
8a9
>     variant: dev
15a17
>     variant: dev
24a27
>     variant: dev
31a35
>     variant: dev
38a43
>       variant: dev
43a49
>         variant: dev

That's it: we just added a variant: dev label to every resource (and to the selectors). Now let's use an overlay to actually change the app. We'll build another one, for the staging environment. Here's its kustomization.yaml:

namePrefix: staging-
commonLabels:
  variant: staging
resources:
  - ../../base
patchesStrategicMerge:
  - map.yaml

And the new ConfigMap that we're merging with the one in the base:

apiVersion: v1
kind: ConfigMap
metadata:
  name: monopole-map
data:
  altGreeting: "Now we're in staging 😏"
  enableRisky: "true" # italics! really risky!
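
Building and applying the staging overlay works exactly like the base. The namespace name here is my own; note that because of the namePrefix, the resources come out as staging-monopole and staging-monopole-map:

$ kubectl create ns kustomize-example-staging
$ kustomize build overlays/staging | kubectl apply -n kustomize-example-staging -f -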

We can see the effects on our website:

[Image: the staging site, with the new greeting]

What else can we do with a kustomization?

Here's a non-exhaustive list (with a small sketch after it):

  • Target multiple resource sources to form the base
  • Add common labels or annotations
  • Use CRDs
  • Modify the image or tag of a resource
  • Prepend or append values to the names of all resources and references
  • Add namespaces to all resources
  • Patch resources
  • Change the number of replicas for a resource
  • Generate Secrets and ConfigMaps, and control the behaviour of ConfigMap and Secret generators
  • Substitute name references (e.g. an environment variable)

The more I think about this, the more obvious and appealing the idea is. Although I'm sure I still have plenty of experiences like this one ahead of me:

  template:
    metadata:
      labels:
        app: {{ template "datadog.fullname" . }}-cluster-agent
        {{- if .Values.clusterAgent.podLabels }}
{{ toYaml .Values.clusterAgent.podLabels | indent 8 }}
        {{- end }}
      name: {{ template "datadog.fullname" . }}-cluster-agent
      annotations:
        {{- if .Values.clusterAgent.datadog_cluster_yaml }}
        checksum/clusteragent-config: {{ tpl (toYaml .Values.clusterAgent.datadog_cluster_yaml) . | sha256sum }}
        {{- end }}
        {{- if .Values.clusterAgent.confd }}
        checksum/confd-config: {{ tpl (toYaml .Values.clusterAgent.confd) . | sha256sum }}
        {{- end }}
        ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
        ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
        ad.datadoghq.com/cluster-agent.instances: |
          [{
            "prometheus_url": "http://%%host%%:5000/metrics",
            "namespace": "datadog.cluster_agent",
            "metrics": [
              "go_goroutines", "go_memstats_*", "process_*",
              "api_requests",
              "datadog_requests", "external_metrics", "rate_limit_queries_*",
              "cluster_checks_*"
            ]
          }]
      {{- if .Values.clusterAgent.podAnnotations }}
{{ toYaml .Values.clusterAgent.podAnnotations | indent 8 }}
      {{- end }}

Sources

  1. Declarative App Management
  2. Kustomize docs
