Authoritative DNS

DNS is one of those things you can't really do without, unless you find remembering the phonebook fun and enjoyable. Let's set up an authoritative DNS server in our cluster using ISC's BIND9, the definitive DNS server.

Install the Helm chart

Except... there isn't one. No official one anyhow, and not even many unofficial ones. I think that's because the people who like containers and the people who like authoritative DNS form a Venn diagram with not much overlap. On top of there being no official Helm chart, the ISC-provided container image is a bit old and not very well maintained, so we'll use the Ubuntu-maintained ubuntu/bind9 image instead.

Things we'll need to configure:

  1. ConfigMap to hold our BIND configuration.
  2. Secret to hold sensitive data.
  3. PersistentVolumeClaim to hold our zone files and other stateful bits.
  4. Deployment to manage the lifecycle of our pod, or pods.
  5. Service with external reachability via BGP announcements.

BIND9 configuration

BIND9 has been around a very long time, and correspondingly has a million different configuration options, which you can find in the BIND9 reference.

We'll use the zone home.arpa as per RFC8375.

BIND server configuration

We're not here to tell you what options are good or bad, just to mock up a suitably bare configuration to get you going, and that looks like this:

options {
  directory "/var/cache/bind";
  dnssec-validation auto;
};

zone "." {
  type hint;
  file "/usr/share/dns/root.hints";
};

zone "localhost" {
  type master;
  file "/etc/bind/db.local";
};

zone "127.in-addr.arpa" {
  type master;
  file "/etc/bind/db.127";
};

zone "0.in-addr.arpa" {
  type master;
  file "/etc/bind/db.0";
};

zone "255.in-addr.arpa" {
  type master;
  file "/etc/bind/db.255";
};

zone "home.arpa" {
  type primary;
  file "/var/zones/home.arpa.zone";
  notify no;
  file "/etc/bind/db.255";
};
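
If you happen to have the BIND utilities on your workstation (an assumption; on Ubuntu and Debian they come in the bind9-utils package, elsewhere often bind-tools), you can sanity-check a local copy of this config before it goes anywhere near the cluster:

# Syntax-only check of a local copy of the config; without -z it won't
# try to load the zone files the config references.
named-checkconf ./named.conf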

TSIG key and secrets

Since we don't want to exec into our container and hand-edit zone files, let's create a TSIG key and update policy to allow us to update our zone and records via RFC2136 dynamic updates.

If you're on macOS, it includes the necessary utility, tsig-keygen:

tsig-keygen

The output should be similar to:

key "tsig-key" {
    algorithm hmac-sha256;
    secret "99xrm1oKgHZnOqgiF2O1EMcn15ZRBbf2QTT22eSv0A0=";
};
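
The nsupdate and dig examples later in this post assume the same key is also saved locally in a file named key.txt, so it's worth capturing it at generation time rather than copy-pasting it later:

# Generate the key and keep a local copy for nsupdate/dig later.
# "tsig-key" is the default key name; key.txt is just the filename the
# later examples assume.
tsig-keygen tsig-key | tee key.txt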

The TSIG key is secret material that can change your zone file, so we should create a Kubernetes Secret to hold it rather than embedding it directly in our config file.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: home-arpa-tsig
stringData:
  home-arpa.key: |
    key "tsig-key" {
        algorithm hmac-sha256;
        secret "99xrm1oKgHZnOqgiF2O1EMcn15ZRBbf2QTT22eSv0A0=";
    };
EOF
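
A quick way to confirm the Secret landed, without printing the key material itself:

# Shows the data keys and their sizes, but not the values.
kubectl describe secret home-arpa-tsig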

Now we include it in our config file and amend our home.arpa zone to use it in an update policy; while we're there, we might as well allow it to transfer the zone as well.

include "/etc/bind/home-arpa.key";

zone "home.arpa" {
  type primary;
  file "/var/zones/home.arpa.zone";
  notify no;
  update-policy { grant tsig-key zonesub ANY; };
  allow-transfer { key "tsig-key"; };
};

Combined full config

Create a Kubernetes ConfigMap to hold this for us:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: named-conf
data:
  named.conf: |
    options {
      directory "/var/cache/bind";
      dnssec-validation auto;
    };

    include "/etc/bind/home-arpa.key";

    zone "." {
      type hint;
      file "/usr/share/dns/root.hints";
    };

    zone "localhost" {
      type master;
      file "/etc/bind/db.local";
    };

    zone "127.in-addr.arpa" {
      type master;
      file "/etc/bind/db.127";
    };

    zone "0.in-addr.arpa" {
      type master;
      file "/etc/bind/db.0";
    };

    zone "255.in-addr.arpa" {
      type master;
      file "/etc/bind/db.255";
    };

    zone "home.arpa" {
      type primary;
      file "/var/zones/home.arpa.zone";
      notify no;
      update-policy { grant tsig-key zonesub ANY; };
      allow-transfer { key "tsig-key"; };
    };
EOF

Stateful data storage

Create a PVC to hold our zone data:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: named-zones
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-block
EOF
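
Depending on the storage class, the claim may sit in Pending until the first pod using it is scheduled (the WaitForFirstConsumer binding mode), so don't panic if it isn't Bound straight away. You can keep an eye on it with:

# Watch the claim until it reports Bound.
kubectl get pvc named-zones -w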

Pod lifecycle

Kubernetes can instantiate a bare Pod, but that has no controller ensuring it stays up and running. Instead we'll create a Kubernetes Deployment, which manages a ReplicaSet and the Pods it spawns.

Deployments are the essence of Kubernetes really: we describe a desired state, and the Deployment ensures it stays that way, from the number of replicas to disruption budgets to topology spread constraints.

Create a Deployment configuration for our use like the following:

cat <<EOF | kubectl apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: bind9-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind9
  template:
    metadata:
      labels:
        app: bind9
    spec:
      containers:
        - name: bind9
          image: ubuntu/bind9
          resources:
            requests:
              cpu: "10m"
              memory: "100Mi"
            limits:
              memory: "100Mi"
          volumeMounts:
          - name: named-conf
            mountPath: /etc/bind/named.conf
            subPath: named.conf
          - name: named-zones
            mountPath: /var/zones
          - name: home-arpa-tsig
            mountPath: /etc/bind/home-arpa.key
            subPath: home-arpa.key
          ports:
          - name: bind9
            containerPort: 53
      volumes:
      - name: named-conf
        configMap:
          name: named-conf
      - name: home-arpa-tsig
        secret:
          secretName: home-arpa-tsig
      - name: named-zones
        persistentVolumeClaim:
          claimName: named-zones
      securityContext:
        fsGroup: 100
EOF
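
Before going any further, make sure the rollout actually completed and the pod is happy:

# Wait for the Deployment to finish rolling out, then check the pod.
kubectl rollout status deployment/bind9-deployment
kubectl get pods -l app=bind9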

Create the home.arpa zone

BIND9 is a bit silly in that you have to manually create the zone file on disk before it will load it, so exec into the container, create the file, and then reload the daemon when you're finished.

Here's a one shot to create a basic zone file:

kubectl exec -it \
  $(kubectl get pod -l "app=bind9" -o jsonpath='{.items[0].metadata.name}') \
  -- bash -c "chown bind:bind /var/zones; cat >/var/zones/home.arpa.zone<<EOF
\\\$TTL 300
@                IN SOA   127.0.0.1. nobody.home.arpa. 2023121400 14400 3600 604800 3600
                 IN NS    localhost.
loopback         IN A     127.0.0.1
EOF"

Now use rndc to reload BIND inside the pod:

kubectl exec $(kubectl get pod -l "app=bind9" -o jsonpath='{.items[0].metadata.name}') -- rndc reload

Make sure you check the pod logs now, as BIND9 is very picky and escaping things across shells is hard!
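
Something along these lines will show the most recent output:

# Tail the logs of whichever pods match the bind9 label.
kubectl logs -l app=bind9 --tail=20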

Accessing BIND9

Now we need to create a Service so we can reach it from outside the cluster, and since this is not HTTP we'll use a LoadBalancer Service with a BGP-announced IP for direct TCP/UDP access.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: bind9
  annotations:
    io.cilium/lb-ipam-ips: 192.168.249.53
spec:
  type: LoadBalancer
  loadBalancerClass: io.cilium/bgp-control-plane
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      name: dns-tcp
      port: 53
      targetPort: 53
    - protocol: UDP
      name: dns-udp
      port: 53
      targetPort: 53
  selector:
    app: bind9
EOF
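
Once your BGP peers have picked up the route, a plain query for the loopback record we seeded into the zone makes a good smoke test (192.168.249.53 being the address we asked Cilium for above):

# Should print 127.0.0.1 if the service IP is reachable and the zone loaded.
dig @192.168.249.53 loopback.home.arpa +short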

Test dynamic updates

Let's try updating a record. Again, macOS includes the necessary tools; this assumes you saved the TSIG key we created in a file named key.txt:

nsupdate -k key.txt <<EOF
server 192.168.249.53
zone home.arpa
update add test.home.arpa. 300 A 127.0.0.1
send
EOF

The pod logs should say something similar to:

tsig-key: updating zone 'home.arpa/IN': adding an RR at 'test.home.arpa' A 127.0.0.1
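
Querying the record straight back confirms the update really landed:

# The freshly added record should resolve immediately.
dig @192.168.249.53 test.home.arpa +short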

We should also be able to transfer the zone like so:

dig @192.168.249.53 home.arpa axfr -k key.txt

And that should dump out the entire zone, similar to the output below:

; <<>> DiG 9.10.6 <<>> @192.168.249.53 home.arpa axfr -k key.txt
; (1 server found)
;; global options: +cmd
home.arpa.      300 IN  SOA 127.0.0.1. nobody.home.arpa. 2023121402 14400 3600 604800 3600
home.arpa.      300 IN  NS  localhost.
test.home.arpa.     300 IN  A   127.0.0.1
loopback.home.arpa. 300 IN  A   127.0.0.1
home.arpa.      300 IN  SOA 127.0.0.1. nobody.home.arpa. 2023121402 14400 3600 604800 3600
tsig-key.       0   ANY TSIG    hmac-sha256. 1710236936 300 32 6xFmiSvUkBFJF8YYYYEd47DED+azKp827omUZYgOgV0= 45069 NOERROR 0
;; Query time: 5 msec
;; SERVER: 192.168.249.53#53(192.168.249.53)
;; WHEN: Tue Mar 12 22:48:56 NZDT 2024
;; XFR size: 6 records (messages 1, bytes 297)

Wrapping it all up

Voilà! You've got an authoritative DNS server running as a native Kubernetes Deployment, built from:

  • ConfigMap to hold our BIND configuration.
  • Secret object to hold sensitive data.
  • PersistentVolumeClaim to hold our zone files and other stateful bits.
  • Deployment to manage the lifecycle of our pod, or pods.
  • Service with external reachability via BGP announcements.
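
If you ever want to confirm everything is still in place, the objects created in this post can be listed in one go (the names are the ones used above):

# List every object this post created.
kubectl get configmap/named-conf secret/home-arpa-tsig \
  pvc/named-zones deployment/bind9-deployment service/bind9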