k0sctl and k0s

Overview

k0s is an open source Kubernetes distribution based on vanilla upstream and very little else, meaning it stays out of our way while remaining 100% upstream compatible.

k0sctl is a tool to install, configure, and manage the necessary k0s binaries and services across a number of hosts, covering lifecycle management, high availability, and everything in between.

Both tools come from Mirantis, a company with a long history in containers and Kubernetes.

Create a configuration file

k0sctl needs only a basic configuration file and, once provided, will do all the heavy lifting. A thorough example is provided below; all that's necessary is to substitute your own host information.

Let's examine a complete configuration. Click the numbered annotations for more information.

---
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts: (1)
  - openSSH: (2)
      address: 192.168.1.51
      user: core
      port: 22
    # this means a converged node, both control plane and worker
    role: controller+worker (3)
    # needed to schedule workloads on this node, even though it's marked as a
    # worker
    noTaints: true (4)
    # we don't need konnectivity, and it has a problem with CPU usage
    # https://github.com/k0sproject/k0s/issues/2068
    installFlags:
      - --disable-components konnectivity-server (5)
  - openSSH:
      address: 192.168.1.52
      user: core
      port: 22
    role: controller+worker
    noTaints: true
    installFlags:
      - --disable-components konnectivity-server
  - openSSH:
      address: 192.168.1.53
      user: core
      port: 22
    role: controller+worker
    noTaints: true
    installFlags:
      - --disable-components konnectivity-server
  k0s:
    # use the latest stable config
    versionChannel: stable (6)
    # use k0sctl for configuration rather than k0s on each node
    dynamicConfig: false (7)
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: mgmt-cluster
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
          extraArgs:
            encryption-provider-config: /etc/kubernetes-enc.yaml (8)
            oidc-issuer-url: https://auth.host.mydomain (9)
            oidc-client-id: k0s
            oidc-username-claim: email
            oidc-groups-claim: groups
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          dualStack: (10)
            enabled: true
            IPv6podCIDR: "fc00::/108"
            IPv6serviceCIDR: "fc00::/108"
          kubeProxy:
            disabled: true (11)
          provider: custom (12)
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          type: etcd (13)
        telemetry:
          enabled: false (14)
  1. You'll need one host entry per cluster member.
  2. Use the openSSH client rather than the built-in SSH client, meaning your ssh_config, keys, and agent are all honoured. address, port, and user should be obvious.
  3. Our cluster has converged nodes, with each node doing both orchestration and workloads.
  4. Normally, Kubernetes controller nodes are tainted so they don't run workloads; we need to remove that taint.
  5. We don't need konnectivity-server, and it still has open bugs (see the GitHub issue linked above).
  6. The k0s release channel to track.
  7. Use k0sctl to configure the nodes on an ongoing basis; otherwise it does a one-shot configuration and we'd need to visit each node in the future.
  8. See our previous section on encryption-at-rest; this is where it is applied.
  9. These flags allow the Kubernetes API server to authenticate users against an external OIDC provider.
  10. All three are needed for IPv6, and IPv6 is good!
  11. We disable kube-proxy because Cilium, which we install a little further along, replaces it.
  12. We do not use a built-in network provider; we install Cilium a little further along.
  13. k0sctl will set up a multi-node etcd cluster for us.
  14. Don't send outbound telemetry to Mirantis.
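Annotation 8 references /etc/kubernetes-enc.yaml from the earlier encryption-at-rest section. As a reminder, a minimal sketch of such an EncryptionConfiguration might look like the following; the key name and secret below are placeholders, so substitute your own generated key:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              # placeholder; generate a real key with:
              #   head -c 32 /dev/urandom | base64
              secret: REPLACE_WITH_BASE64_32_BYTE_KEY
      # identity last, so existing unencrypted data can still be read
      - identity: {}
```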

Now that we're happy and that all makes sense, let's create the cluster.

Create a cluster

We just need to run one command and we'll end up with a functional¹ cluster in a few short minutes.

k0sctl apply

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████
k0sctl v0.17.4 Copyright 2023, k0sctl authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Set k0s version
INFO Looking up latest stable k0s version
INFO Using k0s version v1.29.2+k0s.0
INFO ==> Running phase: Connect to hosts
ControlSocket /Users/dene/.ssh/ctrl-0b1a1c28f2ae1f5c32befe4c92b6e88ca6324fd1 already exists, disabling multiplexing
ControlSocket /Users/dene/.ssh/ctrl-7d5099292b9030a774b840c68148b88a00f4703e already exists, disabling multiplexing
ControlSocket /Users/dene/.ssh/ctrl-cec71d908e574567f6108a39fcd7a49bb5f413f0 already exists, disabling multiplexing
INFO [OpenSSH] core@192.168.1.51:22: connected
INFO [OpenSSH] core@192.168.1.52:22: connected
INFO [OpenSSH] core@192.168.1.53:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [OpenSSH] core@192.168.1.52:22: is running Flatcar Container Linux by Kinvolk 3815.2.0 (Oklo)
INFO [OpenSSH] core@192.168.1.53:22: is running Flatcar Container Linux by Kinvolk 3815.2.0 (Oklo)
INFO [OpenSSH] core@192.168.1.51:22: is running Flatcar Container Linux by Kinvolk 3815.2.0 (Oklo)
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [OpenSSH] core@192.168.1.52:22: using cn2 as hostname
INFO [OpenSSH] core@192.168.1.51:22: using cn1 as hostname
INFO [OpenSSH] core@192.168.1.53:22: using cn3 as hostname
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO ==> Running phase: Validate facts
INFO [OpenSSH] core@192.168.1.51:22: validating configuration
INFO [OpenSSH] core@192.168.1.52:22: validating configuration
INFO [OpenSSH] core@192.168.1.53:22: validating configuration
INFO ==> Running phase: Configure k0s
INFO [OpenSSH] core@192.168.1.52:22: installing new configuration
INFO [OpenSSH] core@192.168.1.51:22: installing new configuration
INFO [OpenSSH] core@192.168.1.53:22: installing new configuration
INFO ==> Running phase: Initialize the k0s cluster
INFO [OpenSSH] core@192.168.1.51:22: installing k0s controller
INFO [OpenSSH] core@192.168.1.51:22: waiting for the k0s service to start
INFO [OpenSSH] core@192.168.1.51:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install controllers
INFO [OpenSSH] core@192.168.1.52:22: validating api connection to https://192.168.1.51:6443
INFO [OpenSSH] core@192.168.1.53:22: validating api connection to https://192.168.1.51:6443
INFO [OpenSSH] core@192.168.1.51:22: generating token
INFO [OpenSSH] core@192.168.1.52:22: writing join token
INFO [OpenSSH] core@192.168.1.52:22: installing k0s controller
INFO [OpenSSH] core@192.168.1.52:22: starting service
INFO [OpenSSH] core@192.168.1.52:22: waiting for the k0s service to start
INFO [OpenSSH] core@192.168.1.52:22: waiting for kubernetes api to respond
INFO [OpenSSH] core@192.168.1.51:22: generating token
INFO [OpenSSH] core@192.168.1.53:22: writing join token
INFO [OpenSSH] core@192.168.1.53:22: installing k0s controller
INFO [OpenSSH] core@192.168.1.53:22: starting service
INFO [OpenSSH] core@192.168.1.53:22: waiting for the k0s service to start
INFO [OpenSSH] core@192.168.1.53:22: waiting for kubernetes api to respond
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 56s
INFO k0s cluster version v1.29.2+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig

Now to fetch the initially created admin kubeconfig for our own use:

mkdir -p ~/.kube
k0sctl kubeconfig >~/.kube/config
chmod 600 ~/.kube/config

and we're ready to move on.
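With the OIDC flags we set on the API server earlier, other users can later authenticate through the issuer instead of sharing the admin credential. As a sketch, assuming the third-party kubelogin (kubectl oidc-login) plugin is installed, a kubeconfig user entry might look like:

```yaml
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        # issuer URL and client ID match the apiserver extraArgs above
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://auth.host.mydomain
          - --oidc-client-id=k0s
```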

See the asciicast below, and note that it plays in real time, taking only 44 seconds to create a three-node cluster:


  1. Functional once we install a CNI, because we're opinionated and don't like the included CNI, Calico.