Install a container network interface (CNI)
Why Cilium
So why not use the k0s-provided CNI, Calico? Well, a few reasons. Calico works great, and is performant in its eBPF mode, but Cilium still has a few tricks up its sleeve:
- Network Policies: Cilium policies can include layer-7 logic directly, which makes Kubernetes network policies superpowered (see the sketch just after this list)!
- Hubble: The visibility that eBPF-powered CNIs provide through Hubble is incredible, especially in a containerised world where networking issues can be hard to troubleshoot.
- BGP control plane: In a bare-metal cluster, this is a must. Having the CNI announce your service addresses without resorting to another tool ties it all together. Sorry, MetalLB!
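As a taste of that layer-7 capability, here's a minimal sketch of a CiliumNetworkPolicy that only lets a client pod talk to an API pod with HTTP GETs on /healthz. The app labels, port and path are hypothetical, purely for illustration:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-healthz-only
spec:
  endpointSelector:
    matchLabels:
      app: demo-api            # hypothetical server label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: demo-client   # hypothetical client label
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:              # layer-7: only GET /healthz gets through
              - method: GET
                path: /healthz
EOF

Anything else, even on the allowed port, gets dropped at layer 7 rather than layer 3/4.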
Prerequisites
Because we want to use the Cilium Gateway API provider (don't worry, we'll cover it later!), we need to make sure our cluster has the Gateway API CRDs:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
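A quick sanity check that the CRDs landed (the standard channel of v1.0.0 should give you gatewayclasses, gateways, httproutes and referencegrants):

kubectl get crd | grep gateway.networking.k8s.io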
Now we're ready to install our cluster networking.
Install Cilium via Helm
We need some container networking, a CNI, before almost anything else will work. We're using Cilium here, so let's install it with Helm. Make sure to substitute your own IP address below, and click the numbered annotations for more information.
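If you haven't already, add Cilium's Helm repository and set the node-address variable the command below expects (192.168.1.10 is just a placeholder; use your first controller's real address):

helm repo add cilium https://helm.cilium.io/
helm repo update
export IP_ADDRESS_OF_FIRST_NODE="192.168.1.10"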
helm upgrade \
  --install cilium cilium/cilium \
  -n kube-system (1) \
  --set encryption.enabled=true (2) \
  --set encryption.type=wireguard (3) \
  --set hubble.relay.enabled=true (4) \
  --set hubble.ui.enabled=true (5) \
  --set ipv6.enabled=true (6) \
  --set kubeProxyReplacement=true (7) \
  --set prometheus.enabled=true (8) \
  --set bgpControlPlane.enabled=true (9) \
  --set k8sServiceHost="${IP_ADDRESS_OF_FIRST_NODE}" (10) \
  --set k8sServicePort="6443" (11) \
  --set gatewayAPI.enabled=true (12) \
  --set cluster.name="mgmt" (13) \
  --set cluster.id="1" (14) \
  --set l2announcements.enabled=true (15) \
  --set externalIPs.enabled=true (16) \
  --set loadBalancerIPs=true (17) \
  --set devices='{eno1}' (18) \
  --set k8sClientRateLimit.qps=10 (19) \
  --set k8sClientRateLimit.burst=30 (20)
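Helm returns as soon as the manifests are applied, but the actual rollout takes a minute or two. If you have the Cilium CLI installed, you can block until everything reports ready:

cilium status --wait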
- Install Cilium into the kube-system namespace.
- Tell Cilium to encrypt traffic between pods.
- Tell Cilium to use WireGuard® for the encryption between pods.
- Enable Cilium's visibility engine, Hubble.
- Enable the UI for Hubble.
- Enable IPv6, as per our k0sctl.yaml.
- Remember, we told k0s not to set up kube-proxy for us; here's where we tell Cilium to take over that job (we'll verify this after checking the pods below).
- Enable Prometheus metrics for Cilium.
- Enable the BGP control plane for announcing our service IP addresses.
- We need to hardcode a node address here: with no CNI yet, pods have no way to reach the in-cluster API service.
- Same again for the API server port.
- Enable the Gateway API, a newer Kubernetes feature intended to replace Ingress objects in the long run.
- Set these now in case we want to use Cluster Mesh features in the future.
- Set these now in case we want to use Cluster Mesh features in the future.
- Enable layer-2 announcements.
- Enable support for Services using .spec.externalIPs.
- Enable layer-2 announcements of all addresses in .status.loadBalancer.ingress.
- The interface to make layer-2 announcements on.
- Rate-limit calls to the k8s API server.
- Rate-limit calls to the k8s API server.
Examine cluster state
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
cilium-8wvss 1/1 Running 0 4m40s
cilium-jltcw 1/1 Running 0 4m40s
cilium-operator-68889747c8-7jpnz 1/1 Running 0 4m40s
cilium-operator-68889747c8-sdmcq 1/1 Running 0 4m40s
cilium-wrfwl 1/1 Running 0 4m40s
coredns-555d98c87b-5z8vv 1/1 Running 0 32m
coredns-555d98c87b-lwmt6 1/1 Running 0 32m
hubble-relay-b54f7896-mcdv4 1/1 Running 0 4m40s
hubble-ui-6548d56557-jfqsf 2/2 Running 0 4m40s
metrics-server-7556957bb7-6xmkd 1/1 Running 0 33m
Notice there are now more pods, clearly named after what we've just installed: Cilium. Even more importantly, notice that the other pods are now Running, not ContainerCreating as before.
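As promised in annotation 7, this is also a good moment to confirm that the kube-proxy replacement took effect. A quick probe against one of the agents (on Cilium 1.15+ images the in-pod binary may be named cilium-dbg instead of cilium):

kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement

You should see KubeProxyReplacement: True, along with the device list (eno1 in our case).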
Check out the Cilium CLI now:
cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: disabled (using embedded mode)
\__/¯¯\__/ Hubble Relay: OK
\__/ ClusterMesh: disabled
Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
Containers: cilium Running: 3
hubble-ui Running: 1
hubble-relay Running: 1
cilium-operator Running: 2
Cluster Pods: 5/5 managed by Cilium
Helm chart version: 1.15.1
Image versions cilium quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4: 3
hubble-ui quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666: 1
hubble-ui quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803: 1
hubble-relay quay.io/cilium/hubble-relay:v1.15.1@sha256:3254aaf85064bc1567e8ce01ad634b6dd269e91858c83be99e47e685d4bb8012: 1
cilium-operator quay.io/cilium/operator-generic:v1.15.1@sha256:819c7281f5a4f25ee1ce2ec4c76b6fbc69a660c68b7825e9580b1813833fa743: 2
It should look vaguely similar, with everything green across the board. You can also check other things, like the status of pod-to-pod encryption:
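One way is to ask the agent itself (a sketch; as above, the in-pod binary may be cilium-dbg, and your output will be a little more verbose):

kubectl -n kube-system exec -ti ds/cilium -- cilium status | grep Encryption
Encryption: Wireguard [NodeEncryption: Disabled, cilium_wg0 (Port: 51871, Peers: 2)]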
Which is exactly what we told it to be. Yay!
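Since we enabled Hubble and its UI, it's worth a quick peek before moving on. The Cilium CLI will port-forward to the UI and open it for you (it lands on a local port, typically http://localhost:12000):

cilium hubble ui

When you're done exploring, let's hit the next section and get some BGP connectivity up and running so we can deploy, then access, workloads in our cluster.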
See the asciicast below: