LDAP

LDAP is a technology for storing user, password, and other directory information, generally used for authentication and authorisation.

The L stands for Lightweight, in comparison to its predecessor X.500, but it's by no means simple. Let's use Kubernetes to abstract away some of the complexity so we can have our own directory to base our authentication on: from system accounts in Linux via SSSD, to backing our OAuth2 server, to configuring VLANs and 802.1X network authentication.

OpenLDAP

OpenLDAP is the OG of LDAP servers; it's open source and incredibly configurable to suit any use case you may have. That also means it's complex! Thankfully the folks at Bitnami have created a great image that handles the initialisation, the multi-master replication, and other bits to just get it working. There's a Helm chart using this image that makes deploying into k8s a simple task.

Helm chart

The Helm chart we'll use lives on GitHub. It also bundles phpLDAPadmin and the LTB (LDAP Tool Box) Self Service Password app for managing passwords, if you would like to use those.

If you have followed us from the beginning, you probably already have the repo added, but if not, let's add the repository now.

helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm repo update
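
You can confirm the chart is visible with a quick search, which should list openldap-stack-ha among the results:

helm search repo helm-openldap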

From there, we can set a small number of parameters and get a highly-available, multi-master instance of OpenLDAP.

Generate credentials

First, we should generate our own passwords. The Helm chart will fill in defaults if we don't provide them, but default credentials are never a good thing.

On macOS you can do the following:

LDAP_ADMIN_PASSWORD=`head -c 32 /dev/urandom | shasum -a 256 | base64 | head -c 32 | base64`
LDAP_CONFIG_ADMIN_PASSWORD=`head -c 32 /dev/urandom | shasum -a 256 | base64 | head -c 32 | base64`
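
On Linux, openssl can do the same job; a minimal sketch:

# 32 hex chars of randomness, then base64-encode for the Secret's data: field
LDAP_ADMIN_PASSWORD=$(openssl rand -hex 16 | tr -d '\n' | base64)
LDAP_CONFIG_ADMIN_PASSWORD=$(openssl rand -hex 16 | tr -d '\n' | base64)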

Feel free to use your generator of choice if you have one, though. Note that both snippets leave the values base64-encoded; that's deliberate, since a Secret's data: field expects base64-encoded values.

Then create the namespace and a Secret with your generated credentials:

kubectl create namespace ldap

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ldappasswords
  namespace: ldap
type: Opaque
data:
  LDAP_ADMIN_PASSWORD: ${LDAP_ADMIN_PASSWORD}
  LDAP_CONFIG_ADMIN_PASSWORD: ${LDAP_CONFIG_ADMIN_PASSWORD}
EOF
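
Verify that the secret landed where we expect:

kubectl -n ldap get secret ldappasswords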

Install via Helm

Here's a good set of values to go on with; make sure the IP address in the service annotation is correct for your setup.

As always, click the numbered annotations for more information.

helm upgrade \
  --install ldap helm-openldap/openldap-stack-ha \
  --create-namespace -n ldap \
  --set global.ldapDomain=mydomain.com (1) \
  --set global.existingSecret=ldappasswords (2) \
  --set service.enabled=true \
  --set service.single=true \
  --set service.type=LoadBalancer (3) \
  --set service.externalTrafficPolicy=Local (4) \
  --set service.annotations."io\.cilium\/lb-ipam-ips"="192.168.249.199" (5) \
  --set persistence.storageClass="ceph-block" \
  --set ltb-passwd.enabled=false (6) \
  --set phpldapadmin.enabled=true (7) \
  --set initTLSSecret.tls_enabled=true (8) \
  --set initTLSSecret.secret=my-tls-secret (9)

  1. If you don't know what this is for, set it to your organisation's domain name.
  2. The k8s Secret we created in the previous step to hold our credentials.
  3. Create a LoadBalancer service, so we can announce it via BGP.
  4. A bit of k8s Service object voodoo: it means don't proxy traffic to this service between cluster nodes, only accept it on the nodes actually running the service. In essence, this preserves the real end-client IP address as seen by the service.
  5. The address we'll use from outside Kubernetes to access LDAP.
  6. Don't deploy the LTB Self Service Password manager; we'll use our own in a future step.
  7. Do deploy phpLDAPadmin; it won't be accessible outside the cluster, except on demand as we'll show below.
  8. Enable the TLS generator container.
  9. The name of the secret in which to put our generated key and certificate.
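
Helm returns as soon as the resources are created. To wait for all replicas to come up (judging by the pod names below, the StatefulSet takes the release name, ldap), you can watch the rollout:

kubectl -n ldap rollout status statefulset/ldap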

Examine status

Check that one instance of phpLDAPadmin and three instances of OpenLDAP are up:

kubectl -n ldap get pods

You should see similar output to the following:

NAME                                 READY   STATUS    RESTARTS      AGE
ldap-0                               1/1     Running   0             19m
ldap-1                               1/1     Running   1 (18m ago)   19m
ldap-2                               1/1     Running   1 (17m ago)   18m
ldap-phpldapadmin-75bf6c4d8d-v4xrq   1/1     Running   0             19m

Check that we can connect with standard LDAP tooling:

LDAP_ADMIN_PASSWORD=`kubectl get secret --namespace ldap ldappasswords -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode`
ldapsearch -x -H ldap://192.168.249.199 -b dc=mydomain,dc=com -D "cn=admin,dc=mydomain,dc=com" -w ${LDAP_ADMIN_PASSWORD} -Z

This should dump out all of the current entries in LDAP similar to the following:

# extended LDIF
#
# LDAPv3
# base <dc=mydomain,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# mydomain.com
dn: dc=mydomain,dc=com
objectClass: dcObject
objectClass: organization
dc: mydomain
o: example

# users, mydomain.com
dn: ou=users,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: users

# user01, users, mydomain.com
dn: cn=user01,ou=users,dc=mydomain,dc=com
cn: User1
cn: user01
sn: Bar1
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTE=
uid: user01
uidNumber: 1000
gidNumber: 1000
homeDirectory: /home/user01

# user02, users, mydomain.com
dn: cn=user02,ou=users,dc=mydomain,dc=com
cn: User2
cn: user02
sn: Bar2
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTI=
uid: user02
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/user02

# readers, users, mydomain.com
dn: cn=readers,ou=users,dc=mydomain,dc=com
cn: readers
objectClass: groupOfNames
member: cn=user01,ou=users,dc=mydomain,dc=com
member: cn=user02,ou=users,dc=mydomain,dc=com

# search result
search: 3
result: 0 Success

# numResponses: 6
# numEntries: 5

That shows we can connect and search, and that the default entries created by the Helm chart are in place. So far, everything is working as expected.
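
For a quicker credential check that doesn't dump the whole tree, ldapwhoami should simply echo back the DN we bound as (dn:cn=admin,dc=mydomain,dc=com):

ldapwhoami -x -H ldap://192.168.249.199 -D "cn=admin,dc=mydomain,dc=com" -w ${LDAP_ADMIN_PASSWORD} -Z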

Directory structure and adding entries are not something we'll cover in this tutorial, but see FIXME for more information.

New concept: kubectl port-forward

We also installed phpLDAPadmin, but how do we connect to it? You might think the same service IP address we configured could be used, but let's examine the service and figure out why that's not the case:

kubectl -n ldap get svc/ldap -o jsonpath="{.spec.selector}" | jq .

Output will look like:

{
  "app.kubernetes.io/component": "ldap",
  "release": "ldap"
}

That tells us the LoadBalancer service in question only applies to pods with the labels above. And looking at our phpLDAPadmin pod:

kubectl -n ldap get pods/ldap-phpldapadmin-75bf6c4d8d-v4xrq -o jsonpath="{.metadata.labels}" | jq .

Output will look like:

{
  "app": "phpldapadmin",
  "pod-template-hash": "75bf6c4d8d",
  "release": "ldap"
}

We don't see the same component. However, listing all services created by the chart leaves an obvious clue:

kubectl -n ldap get svc

Output will look like:

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                       AGE
ldap                LoadBalancer   10.109.161.210   192.168.249.199   389:32616/TCP,636:30477/TCP   48m
ldap-headless       ClusterIP      None             <none>            389/TCP                       48m
ldap-phpldapadmin   ClusterIP      10.107.170.255   <none>            80/TCP                        48m

Cool. So how do we access it? In an ideal world we would already have what's called an ingress controller (or its newer, hotter successor, a Gateway API controller) installed and could add an HTTP route to that service. For now, though, we can use kubectl port-forward.

This is a userland proxy into k8s services. In this case, we want to proxy a port on our localhost to the ldap-phpldapadmin service like so:

kubectl -n ldap port-forward service/ldap-phpldapadmin 8888:80

You'll see the following if successful:

Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80

Our service is now accessible locally through localhost port 8888.

So open up a browser to http://localhost:8888 and use phpLDAPadmin to your heart's content, in a way that ensures it is not accessible to the general public. Just quit the port-forward with ^C when finished.

Bonus points

True high availability

Just having three instances of OpenLDAP running does not make it highly available. For that we turn to another built-in Kubernetes feature: pod affinity, or in this case PodAntiAffinity. We could also use topology spread constraints to ensure that our instances are not all scheduled on the same physical host.

For even more options, see PodDisruptionBudget, which is supported by the chart.

Add the following to a helm values file:

affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname
and add it to your Helm install command line with the following argument:

-f values.yaml
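
After upgrading, it's worth confirming the pods really did land on different nodes:

kubectl -n ldap get pods -o wide

The NODE column should show a distinct node for each ldap-N pod.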

If the chart supported it, we could define something like the following:

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/instance: ldap

This would ensure that no more than one pod gets scheduled per unique hostname.

Install your own certificate

As we've seen above, LDAP is running and available via STARTTLS, ensuring our connections are encrypted. The chart, however, generates a self-signed certificate, which is less than ideal, and we can do better!

Assuming you have a certificate and private key ready:

First, create a Kubernetes secret of tls type:

kubectl -n ldap create secret tls my-tls-secret \
  --cert=path/to/cert/file \
  --key=path/to/key/file

then change the Helm install line to point at your newly created secret:

  --set initTLSSecret.secret=my-tls-secret \
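
To confirm the server is presenting your certificate, you can inspect the LDAPS port directly:

openssl s_client -connect 192.168.249.199:636 -showcerts </dev/null

The certificate chain printed should now be yours rather than the chart's self-signed one.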

Apache Directory Studio

Apache Directory Studio (ADS) is one of the best GUIs around for managing LDAP, but it hasn't seen a release in some time. The current release only supports up to TLS 1.2, while our deployment defaults to TLS 1.3, so ADS will not be able to negotiate a secure connection. OpenLDAP and this chart have no mechanism to configure or limit the TLS version, so we have two options if we want to use ADS.

  1. Allow insecure connections.

    This is bad: it will expose credentials in plain text across the network, and it's really not worth it in the long run.

  2. Configure the container system's openssl to limit the TLS version to 1.2.

    Only slightly more complicated than our default install, and it gives us the warm fuzzies that TLS 1.2 is still capable of providing, security-wise.

The important parts of a system-wide openssl.cnf are as follows. Note the openssl_conf line in the unnamed default section at the top; without it, OpenSSL never reads the sections below:

openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.2
MaxProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2

This pins TLSv1.2 as both the minimum and maximum protocol for the whole system. Write that out to a file, and create a Kubernetes ConfigMap from it to mount into our ldap containers:

cat > openssl.cnf <<EOF
openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.2
MaxProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2
EOF

kubectl -n ldap create configmap opensslcnf --from-file=openssl.cnf

Now add the following to your Helm values yaml:

extraVolumes:
  - name: "opensslcnf"
    configMap:
      name: "opensslcnf"

extraVolumeMounts:
  - name: "opensslcnf"
    mountPath: "/etc/ssl/openssl.cnf"
    subPath: "openssl.cnf"

Or you can set these on the command line by converting them to the appropriate --set arguments.

Network policy

Given that LDAP stores all of our authentication information, it's important to restrict connectivity as much as possible, and we can do this with a CiliumNetworkPolicy. A minimal sketch follows.
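
This example assumes clients live on a management subnet of 192.168.1.0/24 (substitute your own); the endpoint label matches the service selector we inspected earlier. The first rule keeps multi-master replication between the OpenLDAP pods working; the second admits external clients:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ldap-ingress
  namespace: ldap
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/component: ldap
  ingress:
    # Allow the OpenLDAP pods to talk to each other for replication
    - fromEndpoints:
        - matchLabels:
            app.kubernetes.io/component: ldap
      toPorts:
        - ports:
            - port: "389"
              protocol: TCP
    # Allow only our management subnet to reach LDAP (389) and LDAPS (636)
    - fromCIDR:
        - 192.168.1.0/24
      toPorts:
        - ports:
            - port: "389"
              protocol: TCP
            - port: "636"
              protocol: TCP

Depending on how traffic reaches the cluster, externally routed clients may be classified as the world entity rather than matching a CIDR, in which case fromEntities can be used in place of fromCIDR.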