Putting it all together
Before we get on with useful workloads, let's put something together that will demonstrate all of the things we've just built: pods, deployments, network policies, BGP announcements, and more!
We'll create a basic whoami web service and a proxy service. A user will send a web request to the proxy, and it will relay the request to our whoami service.
Create a namespace to hold each workload
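We'll use one namespace per workload, whoami and proxy:
kubectl create namespace whoami
kubectl create namespace proxy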
Create the whoami deployment
cat <<EOF | kubectl apply -f -
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: whoami
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          resources:
            requests:
              cpu: "10m"
              memory: "15Mi"
            limits:
              memory: "15Mi"
          ports:
            - name: web
              containerPort: 80
EOF
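If you want to wait for both replicas to come up before moving on, a quick check looks like this:
kubectl -n whoami rollout status deployment/whoami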
Create the whoami service
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: 192.168.249.149
  name: who-svc
  namespace: whoami
spec:
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  sessionAffinity: None
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: http
      port: 80
  selector:
    app: whoami
EOF
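The io.cilium/lb-ipam-ips annotation asks Cilium's LB IPAM for that specific address (it needs to come from an existing LB IPAM pool); a quick way to confirm the service actually got it:
kubectl -n whoami get svc who-svc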
Create a proxy config, deployment, and service
For brevity, we'll collapse all of that into a single apply.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: proxy
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      proxy_cache_path /tmp levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
      sendfile on;
      keepalive_timeout 65;
      server {
        listen 80 default_server;
        ignore_invalid_headers off;
        client_max_body_size 100m;
        proxy_buffering off;
        server_name _;
        location ~ /$ {
          proxy_pass http://who-svc.whoami.svc.cluster.local;
        }
        location / {
          proxy_cache STATIC;
          proxy_cache_valid 200 1d;
          proxy_cache_use_stale error timeout invalid_header updating
                                http_500 http_502 http_503 http_504;
          proxy_pass http://who-svc.whoami.svc.cluster.local;
          proxy_redirect off;
          break;
        }
      }
      include /etc/nginx/conf.d/*.conf;
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: proxy
  name: proxy
  labels:
    app: proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy
          image: nginx:latest
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          resources:
            requests:
              cpu: "10m"
              memory: "15Mi"
            limits:
              memory: "15Mi"
          ports:
            - name: web
              containerPort: 80
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: 192.168.249.243
  name: proxy-svc
  namespace: proxy
spec:
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  sessionAffinity: None
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: http
      port: 80
  selector:
    app: proxy
EOF
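Since our nginx.conf from the ConfigMap is mounted over the image's default config, it's worth a quick syntax check once a proxy pod is running; the stock nginx image ships the nginx binary, so this should work as-is:
kubectl -n proxy exec deploy/proxy -- nginx -t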
Examine current state
whoami
See the running pods, services, deployments and replicasets that we generated above. Note the external-ip of the who-svc.
kubectl -n whoami get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/whoami-6d9458855d-hx5cp   1/1     Running   0          2m1s
pod/whoami-6d9458855d-zjs69   1/1     Running   0          2m1s

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
service/who-svc   LoadBalancer   10.102.77.131   192.168.249.149   80:32518/TCP   80m

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/whoami   2/2     2            2           2m1s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/whoami-6d9458855d   2         2         2       2m1s
proxy
See the running pods, services, deployments and replicasets that we generated above. Again, note the external-ip of the service.
kubectl -n proxy get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/proxy-78787986d7-hsswr   1/1     Running   0          6m46s
pod/proxy-78787986d7-wljgx   1/1     Running   0          6m46s

NAME                TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
service/proxy-svc   LoadBalancer   10.99.84.46   192.168.249.243   80:31377/TCP   31m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/proxy   2/2     2            2           6m46s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/proxy-78787986d7   2         2         2       6m46s
BGP advertisements
We can see both of the EXTERNAL-IPs from our services announced via BGP:
cilium bgp routes advertised ipv4 unicast
Node   VRouter   Peer          Prefix               NextHop        Age     Attrs
cn1    45633     192.168.1.1   192.168.249.149/32   192.168.1.51   3m46s   [{Origin: i} {AsPath: } {Nexthop: 192.168.1.51} {LocalPref: 100}]
       45633     192.168.1.1   192.168.249.243/32   192.168.1.51   8m1s    [{Origin: i} {AsPath: } {Nexthop: 192.168.1.51} {LocalPref: 100}]
cn2    45633     192.168.1.1   192.168.249.149/32   192.168.1.52   3m47s   [{Origin: i} {AsPath: } {Nexthop: 192.168.1.52} {LocalPref: 100}]
       45633     192.168.1.1   192.168.249.243/32   192.168.1.52   8m1s    [{Origin: i} {AsPath: } {Nexthop: 192.168.1.52} {LocalPref: 100}]
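If those prefixes ever fail to show up, checking the state of the peering sessions is a good first step:
cilium bgp peers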
Make some requests
Direct to the web service
curl http://192.168.249.149
Hostname: whoami-6d9458855d-zjs69
IP: 127.0.0.1
IP: ::1
IP: 10.0.2.58
IP: fd00::244
IP: fe80::cc53:c7ff:fe9d:19fd
RemoteAddr: 192.168.1.115:54841
GET / HTTP/1.1
Host: 192.168.249.149
User-Agent: curl/8.4.0
Accept: */*
Through our proxy
curl http://192.168.249.243
Hostname: whoami-6d9458855d-zjs69
IP: 127.0.0.1
IP: ::1
IP: 10.0.2.58
IP: fd00::244
IP: fe80::cc53:c7ff:fe9d:19fd
RemoteAddr: 10.0.2.171:33958
GET / HTTP/1.1
Host: who-svc.whoami.svc.cluster.local
User-Agent: curl/8.4.0
Accept: */*
Connection: close
Create the initial network policies
Our connectivity looks good so far, but that's because everything has unrestricted ingress and egress. In the real world, our clients should not be able to connect directly to the web service; access should be forced through our proxy. So let's set that up.
Network policy for our whoami service
Our whoami service should only allow ingress from the proxy, and no egress, so we add a policy like the one below.
The numbered callouts are explained after the policy.
cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: whoami-policy
  namespace: whoami
spec:
  endpointSelector: {}        # (1)
  ingress:
    - fromEndpoints:          # (2)
        - matchLabels:
            app: proxy
            io.kubernetes.pod.namespace: proxy
      toPorts:
        - ports:
            - port: "80"
  egress:
    - {}                      # (3)
EOF
- (1) Match any Cilium endpoint in the whoami namespace.
- (2) Ingress traffic from the k8s namespace proxy, matching the label app=proxy, is allowed to our port 80.
- (3) An empty selector matches all egress, and without any other policy in effect, it will deny.
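Before testing, it's worth confirming the policy was accepted; cnp is the short name for ciliumnetworkpolicies:
kubectl -n whoami get cnp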
Network policy for our proxy
Our proxy might need outbound traffic to do GeoIP lookups against get.geojs.io, and will still allow ingress from anywhere outside the cluster. A policy to that effect could look like this.
The numbered callouts are explained after the policy.
cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: proxy-policy
  namespace: proxy
spec:
  endpointSelector: {}        # (1)
  egress:
    - toFQDNs:                # (2)
        - matchName: get.geojs.io
      toPorts:
        - ports:
            - port: "443"
    - toEndpoints:            # (3)
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchName: "get.geojs.io"
EOF
- (1) Match any Cilium endpoint in the proxy namespace.
- (2) This is where CiliumNetworkPolicy shines compared to vanilla Kubernetes: allow egress based on an FQDN, get.geojs.io in this case.
- (3) Another example of Cilium super-powers: this allows egress from proxy pods to the kube-dns service, but only permits lookups of one hostname, get.geojs.io. Any other request will be refused.
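As a quick positive test, the allowed FQDN should still be reachable from a proxy pod over 443. This assumes curl is present in the nginx image; swap in another HTTP client if it isn't:
# assumes curl is available inside the proxy container image
kubectl -n proxy exec deploy/proxy -- curl -sS --connect-timeout 3 https://get.geojs.io/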
Use the Cilium Hubble observability framework to verify
So now we should not be able to get a response directly from our web service. Let's try that out.
curl --connect-timeout 3 http://192.168.249.149
curl: (28) Failed to connect to 192.168.249.149 port 80 after 3002 ms: Timeout was reached
Great! But how do we know it was our network policy and not some other random error? Well, Hubble to the rescue. In another terminal, get things ready:
We need to forward some internal hubble ports locally:
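One way to do that is with the Cilium CLI (it stays in the foreground, so leave it running):
cilium hubble port-forward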
Now run hubble itself:
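Mirroring the command we'll use for the proxy further down, watch for drops in the whoami namespace:
hubble observe -n whoami -t drop -f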
and try our curl command again; Hubble will show us the dropped packets:
Mar 4 23:40:30.060: 192.168.1.115:58148 (world-ipv4) <> whoami/whoami-6d9458855d-zjs69:80 (ID:133101) Policy denied DROPPED (TCP Flags: SYN)
Mar 4 23:40:31.106: 192.168.1.115:58148 (world-ipv4) <> whoami/whoami-6d9458855d-zjs69:80 (ID:133101) Policy denied DROPPED (TCP Flags: SYN)
Mar 4 23:40:32.047: 192.168.1.115:58148 (world-ipv4) <> whoami/whoami-6d9458855d-zjs69:80 (ID:133101) Policy denied DROPPED (TCP Flags: SYN)
Looks good! What about our proxy? If we try to access a hostname other than get.geojs.io from inside a proxy pod, the DNS lookup is refused.
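A hypothetical spot check from inside one of the proxy pods looks like this; again it assumes curl is present in the image, and example.com is just an arbitrary name the policy doesn't allow:
# hypothetical check; assumes curl exists in the proxy container image
kubectl -n proxy exec deploy/proxy -- curl -sS --connect-timeout 3 https://example.com/
The name lookup should be refused rather than simply timing out.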
And if we try get.geojs.io on a port other than 443, we see the drop in Hubble:
hubble observe -n proxy -t drop -f
Mar 4 23:43:54.801: proxy/proxy-78787986d7-wljgx:53734 (ID:134393) <> get.geojs.io:80 (ID:16777217) Policy denied DROPPED (TCP Flags: SYN)
Mar 4 23:43:55.002: proxy/proxy-78787986d7-wljgx:53420 (ID:134393) <> get.geojs.io:80 (ID:16777220) Policy denied DROPPED (TCP Flags: SYN)
Mar 4 23:43:55.813: proxy/proxy-78787986d7-wljgx:53734 (ID:134393) <> get.geojs.io:80 (ID:16777217) Policy denied DROPPED (TCP Flags: SYN)
Mar 4 23:43:56.005: proxy/proxy-78787986d7-wljgx:53420 (ID:134393) <> get.geojs.io:80 (ID:16777220) Policy denied DROPPED (TCP Flags: SYN)
And we're done... for now.
Mission accomplished! You have:
- Created Deployments instead of bare pods
- Used ReplicaSets for high availability
- Exposed Services externally via BGP announcements
- Written network policies that limit ingress and egress traffic for each workload
- Used Cilium super-powers to keep those network policies simple
- Used Hubble to prove the policies are effective
We're ready to move on with all the bits and pieces we need for the rest of our infrastructure.