thoughtexpo

... @leelavg's corner on the internet, night or day, small or big.

Setup k3d Cluster for local Testing or Development

2021-Mar-10 • Tags: k3d, docker, kubernetes

There are over 90 Certified Kubernetes offerings as of this blog's publication. One such project, currently under the Linux Foundation, is k3s, and the below quote is taken directly from its landing page:

K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

Clubbing the two words resource-constrained and IoT from the quote, we can infer that this distribution is also well suited for setting up a local Kubernetes cluster. Going a step further, and inspired by the KinD project, k3d was created to run the k3s Kubernetes distribution in docker; as a result k3d has a single dependency, which is docker.

Pre-requisites §

Conceptual knowledge of docker and kubernetes to utilize the k3d-created cluster; you can refer to the below resources*:

*Non-affiliated; I referred to all these resources at least once and so recommend them.

Although docker has been updated to work with cgroup v2, I was only able to set up a k3d cluster after falling back to cgroup v1 using the below method:
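On Fedora the usual way to fall back to cgroup v1 is a kernel boot argument; the exact method used here wasn't captured, so treat the below as a sketch of the likely approach:

```shell
# Tell systemd to use the legacy (v1) cgroup hierarchy on all installed kernels;
# takes effect only after a reboot
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```

You can confirm the switch afterwards with `mount | grep cgroup` — with cgroup v1 you'll see multiple per-controller mounts instead of a single cgroup2 mount.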

Fedora's recommended option is to use podman and buildah; however, the steps in this post were tested using docker.

I used Fedora 32 Server Edition with 8Gi of RAM and /var mounted on a 100GB partition (having free space on /var never hurts 😄 ), and tested the installation and subsequent operations as the root user (just having sudo access also suffices). YMMV depending on how well-versed you are with your machine.

Installing Binary §

k3d version 4 is preferred as it has a k3d-managed registry, which comes in handy to create a registry along with cluster creation with no extra steps.

-> curl -OL https://github.com/rancher/k3d/releases/download/v4.2.0/k3d-linux-amd64
-> chmod +x k3d-linux-amd64
-> mv k3d-linux-amd64 /usr/local/bin/k3d

After downloading the binary you can verify the version, and that's all it needs to create a cluster:

-> k3d version
k3d version v4.2.0
k3s version v1.20.2-k3s1 (default)

Cluster Operations §

Let's go through the lifecycle of a k3d cluster, and later we can move on to customizing the cluster to our needs. Please refer to the docs for the command tree.

 1  # Create a cluster with One master (-s/--server) and Three worker (-a/--agent) nodes
 2  -> k3d cluster create test -s 1 -a 3
 3  [...]
 4
 5  -> k3d cluster list
 6  NAME SERVERS AGENTS LOADBALANCER
 7  test 1/1 3/3 true
 8
 9  -> k3d node list
10  NAME ROLE CLUSTER STATUS
11  k3d-test-agent-0 agent test running
12  k3d-test-agent-1 agent test running
13  k3d-test-agent-2 agent test running
14  k3d-test-server-0 server test running
15  k3d-test-serverlb loadbalancer test running
16
17  -> docker ps -a
18  CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19  e2380067ded8 rancher/k3d-proxy:v4.2.0 "/bin/sh -c nginx-pr…" 21 hours ago Up 21 hours 80/tcp, 0.0.0.0:38871->6443/tcp k3d-test-serverlb
20  1a181b9a04b3 rancher/k3s:v1.20.2-k3s1 "/bin/k3s agent" 21 hours ago Up 21 hours k3d-test-agent-2
21  1df295350238 rancher/k3s:v1.20.2-k3s1 "/bin/k3s agent" 21 hours ago Up 21 hours k3d-test-agent-1
22  b2846655286c rancher/k3s:v1.20.2-k3s1 "/bin/k3s agent" 21 hours ago Up 21 hours k3d-test-agent-0
23  3aae96cd4797 rancher/k3s:v1.20.2-k3s1 "/bin/k3s server --t…" 21 hours ago Up 21 hours k3d-test-server-0
24
25  -> netstat -tlpn
26  Active Internet connections (only servers)
27  Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
28  tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 921/sshd: /usr/sbin
29  tcp 0 0 0.0.0.0:38871 0.0.0.0:* LISTEN 2450824/docker-prox
30  tcp6 0 0 :::9090 :::* LISTEN 1/systemd
31  tcp6 0 0 :::22 :::* LISTEN 921/sshd: /usr/sbin
32
33  -> kubectl get nodes -o wide
34  NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
35  k3d-test-agent-1 Ready <none> 20h v1.20.2+k3s1 172.28.0.4 <none> Unknown 5.9.16-100.fc32.x86_64 containerd://1.4.3-k3s1
36  k3d-test-agent-2 Ready <none> 20h v1.20.2+k3s1 172.28.0.5 <none> Unknown 5.9.16-100.fc32.x86_64 containerd://1.4.3-k3s1
37  k3d-test-agent-0 Ready <none> 20h v1.20.2+k3s1 172.28.0.3 <none> Unknown 5.9.16-100.fc32.x86_64 containerd://1.4.3-k3s1
38  k3d-test-server-0 Ready control-plane,master 20h v1.20.2+k3s1 172.28.0.2 <none> Unknown 5.9.16-100.fc32.x86_64 containerd://1.4.3-k3s1
39
40  -> docker exec k3d-test-server-0 sh -c 'ctr version'
41  Client:
42    Version: v1.4.3-k3s1
43    Revision:
44    Go version: go1.15.5
45
46  Server:
47    Version: v1.4.3-k3s1
48    Revision:
49    UUID: 2d6b816f-3d50-408b-a98f-0415b293b440
50

We can infer the following from creating the cluster:

  1. We got a loadbalancer (nginx lb) with the cluster (line 15), which can be reached at port 38871 on localhost (lines 19, 29)
  2. We can provide --api-port PORT while creating a cluster to make sure the lb always uses that port internally
  3. k3d uses the containerd runtime for running containers (lines 35, 42)
  4. We can't share local docker images directly with k3d nodes; we either need to save local images to a tar and import it into the k3d cluster, or create a local registry
  5. For accessing services from pods deployed in the k3d cluster, we need to deploy ingress rules and a controller, and thus should have a rough idea of the services we'll be using before creating the cluster itself; for testing/debugging we can use kubectl port-forward functionality
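Points 2 and 5 above can be exercised as follows; the port numbers and service name below are hypothetical placeholders:

```shell
# Pin the API/loadbalancer port at creation time instead of getting a random one
k3d cluster create test -s 1 -a 3 --api-port 6550

# Temporarily expose a ClusterIP service for testing/debugging
# ("my-service" and the 8080:80 mapping are placeholders)
kubectl port-forward svc/my-service 8080:80
```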

I highly recommend going through the docs for customizing the cluster during creation. As I'm fine with the defaults and am mostly concerned with storage, I didn't explore networking and other components in k3d enough to blog about them.

As we don't want to always pull images from a remote repository, we'll be concentrating on point 4 from above for using local docker images.

NOTE: If k3d nodes are tainted, pods without matching tolerations won't be scheduled. Before scheduling any pods, and after verifying the nodes are online, remove the taints by running the following commands:

# Verify presence of taints on the nodes (jq is a command line JSON Processor)
-> kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "node.cloudprovider.kubernetes.io/uninitialized",
    "value": "true"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node.cloudprovider.kubernetes.io/uninitialized",
    "value": "true"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node.cloudprovider.kubernetes.io/uninitialized",
    "value": "true"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node.cloudprovider.kubernetes.io/uninitialized",
    "value": "true"
  }
]

If taints are found in the above output, remove them by running the below command (the '-' at the end of the taint key removes it):

-> for name in $(kubectl get nodes -o jsonpath="{..name}"); do kubectl taint nodes $name node.cloudprovider.kubernetes.io/uninitialized-; done;

Optimizing workflow §

We'll look at two scenarios for using local docker images in a k3d cluster: one is docker save and import into the cluster, the second is using a local registry. Each method has its own use cases.

Save and Import §

Let's deploy a busybox container, find the image source from the k3d container runtime, and pull that image locally with docker.

-> bat deployment-busybox.yaml --plain
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-pod
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
          - '/bin/tail'
          - '-f'
          - '/dev/null'
        livenessProbe:
          exec:
            command:
              - 'sh'
              - '-ec'
              - 'df'
          initialDelaySeconds: 3
          periodSeconds: 3

Apply the manifest and verify pod creation. Please note that when a k3d cluster is created, the kubectl context is set to use the newly created cluster; thus kubectl is able to access the Kubernetes API and its resources without any manual intervention.

-> kubectl apply -f deployment-busybox.yaml
deployment.apps/test created

-> kubectl get deploy test
NAME READY UP-TO-DATE AVAILABLE AGE
test 1/1 1 1 82s

-> kubectl get pods -o wide | grep test
test-d77db976d-qxsvr 1/1 Running 0 2m46s 10.42.2.14 k3d-test-agent-1 <none> <none>
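As noted above, k3d switches the kubectl context at cluster creation; if kubectl ever points at the wrong cluster, you can check and switch back (k3d prefixes context names with `k3d-`):

```shell
# Show which cluster kubectl currently talks to
kubectl config current-context

# Re-select the k3d cluster's context if needed
kubectl config use-context k3d-test
```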

We can see from the above that the pod is running on k3d-test-agent-1. Now we'll query the image existing on that node and pull the same image locally from the remote repo/hub.

-> docker exec k3d-test-agent-1 sh -c 'ctr image list -q | grep "busybox:latest"'
docker.io/library/busybox:latest

# In case you do not know the image name, you can query the pod spec
-> kubectl get pod test-d77db976d-qxsvr -o jsonpath="{..image}"
docker.io/library/busybox:latest busybox

# Pulling image based on ctr images from k3d node
-> for image in $(docker exec k3d-test-agent-1 sh -c 'ctr image list -q | grep "busybox:latest"'); do docker pull $image; done;
latest: Pulling from library/busybox
8b3d7e226fab: Pull complete 
Digest: sha256:ce2360d5189a033012fbad1635e037be86f23b65cfd676b436d0931af390a2ac
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest

# (or)

# Pulling image based on currently deployed pods
-> for image in $(kubectl get pod test-d77db976d-qxsvr -o jsonpath="{..image}"); do docker pull $image; done;
Using default tag: latest
latest: Pulling from library/busybox
Digest: sha256:ce2360d5189a033012fbad1635e037be86f23b65cfd676b436d0931af390a2ac
Status: Image is up to date for busybox:latest
docker.io/library/busybox:latest
latest: Pulling from library/busybox
Digest: sha256:ce2360d5189a033012fbad1635e037be86f23b65cfd676b436d0931af390a2ac
Status: Image is up to date for busybox:latest
docker.io/library/busybox:latest

# Verify image exists in local docker
-> docker images | grep busybox
busybox                                  latest         a9d583973f65   13 hours ago        1.23MB
busybox                                  stable         a9d583973f65   13 hours ago        1.23MB

Now that the images pulled from the repo/hub exist locally, we can save them to a tar with the correct tags and import them into k3d after the cluster is created.

-> docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep busybox) -o localimages.tar

# Delete earlier created cluster (or) you can create a new cluster and import above created tarball
-> k3d cluster delete test
[...]

-> k3d cluster create test
[...]

# Perform below before deploying busybox
-> k3d image import -k localimages.tar -c test
[...]

After the above operation, the image existing in the k3d cluster is used for running the container. The -k option keeps the local tarball after it is uploaded to the cluster, and -c specifies the cluster name.
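To confirm the import worked before deploying anything, you can list the images known to a node's containerd, using the same pattern shown earlier:

```shell
# The imported busybox image should now be visible inside the node
docker exec k3d-test-server-0 sh -c 'ctr image list -q | grep busybox'
```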

Use case:

Caveat:

Local Registry §

As per the docs, k3d has an inbuilt capability for creating a registry associated with the cluster itself; however, I'm just using docker to run a local registry and connecting the k3d network with the registry container.
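For completeness, the k3d-managed route mentioned in the docs looks roughly like the below (flag and subcommand names per the k3d v4 docs; this is not the approach used in this post):

```shell
# Let k3d create and wire up a registry alongside the cluster
k3d cluster create test --registry-create

# List registries managed by k3d
k3d registry list
```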

Let's take a detour from k3d and use docker to build an image. I don't want to re-hash the details/best practices of building images as they are explained in greater detail in the docker documentation; I recommend going through storage drivers and multistage builds at a minimum.

Let's build a minimal image which I generally use to verify checksums and create IO in a storage system, and I'll touch upon some docker concepts along the way.

-> mkdir -p localimage && cd $_
-> bat Dockerfile --plain
# Base image in https://github.com/Docker-Hub-frolvlad/docker-alpine-python3
FROM frolvlad/alpine-python3 AS compile
RUN apk add --no-cache gcc musl-dev git python3-dev && mkdir /opt/bin
RUN wget https://raw.githubusercontent.com/avati/arequal/master/arequal-checksum.c
RUN wget https://raw.githubusercontent.com/avati/arequal/master/arequal-run.sh -P /opt/bin/
RUN sed -i 's/bash/sh/' /opt/bin/arequal-run.sh
RUN gcc -o /opt/bin/arequal-checksum arequal-checksum.c && chmod +x /opt/bin/arequal*
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN pip install git+https://github.com/vijaykumar-koppad/Crefi.git@7c17a353d19666f230100e92141b49c29546e870

FROM frolvlad/alpine-python3 AS build
RUN apk add --no-cache rsync
COPY --from=compile /opt /opt

ENV PATH="/opt/venv/bin:/opt/bin:$PATH"
CMD ["sh"]

About the above Dockerfile:

Build the docker image and observe some other details after image creation:

-> docker build -t test-fs:latest .
Sending build context to Docker daemon 2.56kB
Step 1/14 : FROM frolvlad/alpine-python3 AS compile
latest: Pulling from frolvlad/alpine-python3
596ba82af5aa: Pull complete
911eb5656b83: Pull complete
Digest: sha256:69f108d85ddb473123c5fdae3f415aee900f0bccd2e78523f7ceba23a9688b0e
Status: Downloaded newer image for frolvlad/alpine-python3:latest
 ---> 80484c205b65
Step 2/14 : RUN apk add --no-cache gcc musl-dev git python3-dev && mkdir /opt/bin
 ---> Running in 273d48be6da8
 [...]
Removing intermediate container b20b7b5fb2dc
 ---> 98ccc0c7149f
Step 14/14 : CMD ["sh"]
 ---> Running in fcd348b9e8d6
Removing intermediate container fcd348b9e8d6
 ---> 901544a01eb2
Successfully built 901544a01eb2
Successfully tagged test-fs:latest

-> docker images | grep fs
test-fs latest 901544a01eb2 About a minute ago 72.8MB

-> docker history test-fs
IMAGE CREATED CREATED BY SIZE COMMENT
901544a01eb2 About a minute ago /bin/sh -c #(nop) CMD ["sh"] 0B
98ccc0c7149f About a minute ago /bin/sh -c #(nop) ENV PATH=/opt/venv/bin:/o… 0B
22ed071e4b27 About a minute ago /bin/sh -c #(nop) COPY dir:b4fca6fe0f106c874… 12.2MB
2864ccf4ba22 About a minute ago /bin/sh -c apk add --no-cache rsync 1.56MB
80484c205b65 6 weeks ago /bin/sh -c echo "**** install Python ****" & 53.5MB
<missing> 6 weeks ago /bin/sh -c #(nop) ENV PYTHONUNBUFFERED=1 0B
<missing> 7 weeks ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 7 weeks ago /bin/sh -c #(nop) ADD file:edbe213ae0c825a5b… 5.61MB

About the above image creation:

Well, that's it for the interlude. Coming back to k3d, we'll follow the below steps in brief:

# Start registry container
-> docker container run -d --name registry.localhost --restart always -p 5000:5000 registry:2

# Attach container to k3d cluster network
-> docker network connect k3d-test registry.localhost

# Tag local image with local registry
-> docker tag test-fs:latest registry.localhost:5000/test-fs:latest

# Push tagged image to local registry
-> docker push registry.localhost:5000/test-fs:latest

After performing the above operations, you can use image: registry.localhost:5000/test-fs:latest in the deployment yaml file to use the image from the local registry.
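For example, the busybox deployment from earlier can be pointed at the locally built image without editing the yaml; the deployment and container names below are taken from the earlier manifest:

```shell
# Swap the container image of the existing "test" deployment
kubectl set image deployment/test test-pod=registry.localhost:5000/test-fs:latest

# Wait for the rollout to finish
kubectl rollout status deployment/test
```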

Use case:

Caveat:

# Docker daemon must be configured to trust the plain-HTTP local registry
-> bat /usr/lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry registry.localhost:5000
# Create below file
-> bat ~/.k3d/registries.yaml --plain
mirrors:
  "registry.localhost:5000":
    endpoint:
      - "http://registry.localhost:5000"

# Supply registries.yaml location while creating cluster
-> k3d cluster create test -a 3 -v $HOME/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml

Well, that brings us to the end of the blog post. I covered only the setup and an opinionated workflow for testing/debugging kubernetes workloads in k3d, and intentionally left out the usage of the local registry in resource deployments as I intend to cover that in a later post.

Stay tuned to learn about Container Storage Interface (CSI) and how to work with CSI by deploying the resources in a k3d cluster.

Send an email for any comments. Kudos for making it to the end. Thanks!