Exploring Kubernetes with K3d: A Kubernetes Cluster
I’ve been continuing my exploration of Kubernetes, via the container-based
k3d
implementation. In my previous article,
I created a REST
service. In this article, I start up a Kubernetes cluster
and deploy the REST
service to it.
Kubernetes and K3s and K3d, Oh My!
To start with, it’s worth a quick recap of kubernetes, k3s and k3d.
I’ll assume that if you’re reading this, you already know what docker
is. If
not, it will be worth looking up docker and then coming back here. You could
also check out some of my previous posts about docker.
My previous article is particularly relevant,
since it describes the docker container I will be using later in this article.
So, assuming you have a lot of docker containers to manage, kubernetes is a platform which allows you to orchestrate those containers. It takes care of (or at least simplifies) activities such as deploying containers, providing access to them, and scaling them up and down.
Although kubernetes has effectively become the industry standard for container orchestration, it can be difficult to run and uses quite a lot of resources. For situations where that might be problematic (for example where resource is limited, like in IoT devices), k3s was created as a lightweight implementation. It is a certified kubernetes distribution, but packaged as a single binary to reduce dependencies and complexity. This means that it can be run on even a small device like a Raspberry Pi.
The name k3s is a play on the use of the term “k8s” to refer to kubernetes. If you haven’t encountered this type of abbreviation before, it takes the first and last letters of a word and replaces the rest of the word with the number of letters between the first and last. So “internationalisation” is a common example, which is abbreviated as “i18n” — the first letter “i”, the last letter “n”, and a count of the number of letters between them. “a11y” is another common abbreviation meaning “accessibility”. Using this pattern kubernetes is frequently abbreviated to “k8s”. So, when looking for a name for this smaller implementation, it was decided to call it “k3s” since that is half the size of “k8s”.
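Just to make the pattern concrete, here is a small illustrative shell function (my own sketch, not part of any of the tools discussed here) which builds a numeronym from a word:
numeronym() {
    local word="$1"
    local len=${#word}
    if (( len <= 3 )); then
        echo "$word"
    else
        # first letter, count of letters in between, last letter
        echo "${word:0:1}$(( len - 2 ))${word: -1}"
    fi
}
numeronym kubernetes             # prints "k8s"
numeronym internationalisation   # prints "i18n"
numeronym accessibility          # prints "a11y"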
Moving on to k3d, we start getting into Inception-level territory, where the k3s binary itself is run in docker. So, instead of installing the binaries onto my computer, I can just pull everything I need as docker images and run them. I decided to use “k3d” for this reason, since it leaves the smallest footprint on my computer.
Spinning up K3s
In order to start using k3d, we need to install the k3d binary. Various
installation instructions are available on the k3d github
repo. I chose to use the install script they
supply. This creates a k3d
command which you use to create, manage and
delete your k3d instance (or “k3d cluster”).
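For reference, the install script is just piped into a shell. At the time of writing the README’s one-liner looked roughly like this, but do copy the exact command from the k3d repo, since the script location can change:
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash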
While we are installing tools, we should also install the kubectl command
(pronounced in a variety of different ways), which is the standard tool for
managing kubernetes. Again, there are several installation options depending on
your operating system and preferences. Since I’m running a standard(-ish) Linux
distro, I used native package management for my installation. The available
methods are documented on the main kubernetes site.
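If a package isn’t available for your distro, the kubernetes docs also describe downloading the kubectl binary directly. Roughly (check the docs for the current commands and your CPU architecture, and treat this as a sketch rather than gospel):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client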
Now we’ve got our k3d command installed, we can use it to create our
kubernetes cluster. The simplest thing we can do is run k3d cluster create,
which will use all the default settings:
$ k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (1137...)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-k3s-default-serverlb'
INFO[0007] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0009] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0009] Cluster 'k3s-default' created successfully!
INFO[0009] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0009] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info
We can then take a look at our cluster:
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
k3s-default 1/1 0/0 true
$ k3d node list
NAME ROLE CLUSTER STATUS
k3d-k3s-default-server-0 server k3s-default running
k3d-k3s-default-serverlb loadbalancer k3s-default running
We can see that k3d has created a cluster called k3s-default
and created two
nodes in it — a server and a load balancer. The server node is the one we
can interact with to control the kubernetes cluster, and it is also where we
can deploy our docker containers. The load balancer will provide a route into
the docker containers running on the cluster for our end users (depending on
the permissions we set up).
Since we now have a running kubernetes instance, we can also use the kubectl
command to inspect it. We can start by checking which nodes kubernetes itself
can see:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-k3s-default-server-0 Ready control-plane,master 4m48s v1.20.7+k3s1
This shows the server that k3d has created. We can also check to see if there are any “pods” running — pods are groups of one or more docker containers.
$ kubectl get pods
No resources found in default namespace.
Unsurprisingly, there are no pods deployed, because we haven’t deployed our
service yet. However, the response is only checking the default
namespace.
Let’s see what other namespaces we have:
$ kubectl get namespaces
NAME STATUS AGE
default Active 11m
kube-system Active 11m
kube-public Active 11m
kube-node-lease Active 11m
That’s interesting: we also have some namespaces which belong to kubernetes itself. Let’s see what pods we can find there:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system local-path-provisioner-5ff76fc89d-9jws8 1/1 Running 0 12m
kube-system coredns-854c77959c-2pp6s 1/1 Running 0 12m
kube-system metrics-server-86cbb8457f-8tn2h 1/1 Running 0 12m
kube-system helm-install-traefik-mhd6c 0/1 Completed 0 12m
kube-system svclb-traefik-qbk9r 2/2 Running 0 11m
kube-system traefik-6f9cbd9bd4-tvcz4 1/1 Running 0 11m
Now we can see some interesting things in that output. There is a DNS server
and a metrics server. We can also see that traefik is installed by default to
handle incoming traffic (I’ve written an article about traefik previously).
And it looks like helm was used to install traefik.
Finally, if we’ve finished looking around our kubernetes cluster, we can tear the whole thing down:
$ k3d cluster delete
INFO[0000] Deleting cluster 'k3s-default'
INFO[0000] Deleted k3d-k3s-default-serverlb
INFO[0000] Deleted k3d-k3s-default-server-0
INFO[0000] Deleting cluster network 'k3d-k3s-default'
INFO[0001] Deleting image volume 'k3d-k3s-default-images'
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster k3s-default!
So this shows how we can spin up a kubernetes cluster with k3d and tear it down again.
Configuring K3d
Having successfully got kubernetes up and running via k3d, I now want to
customise it, to make it useful to me. Firstly, I want to give it the name
banking
, so it makes sense when I deploy my banking testdata
microservice
to it. Secondly, I want to add some “agents” to the cluster. These will be
controlled by the server node, and will give me additional places to deploy my
pods. Finally, I want to add a docker registry to the cluster, so I have
somewhere for the docker images needed for my microservices.
Naming the cluster is easy: I just append what I want to call it to the end of
the create command. I can use the --agents
parameter to say how many agent
nodes I want, and I can use the --registry-create
parameter to tell k3d to
create (and manage) my docker registry. Putting that all together, I get this:
$ k3d cluster create --agents 2 --registry-create banking
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-banking' (e1d...)
INFO[0000] Created volume 'k3d-banking-images'
INFO[0000] Creating node 'k3d-banking-registry'
INFO[0000] Successfully created registry 'k3d-banking-registry'
INFO[0001] Creating node 'k3d-banking-server-0'
INFO[0001] Creating node 'k3d-banking-agent-0'
INFO[0001] Creating node 'k3d-banking-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-banking-serverlb'
INFO[0001] Starting cluster 'banking'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-banking-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting Node 'k3d-banking-agent-0'
INFO[0019] Starting Node 'k3d-banking-agent-1'
INFO[0026] Starting helpers...
INFO[0026] Starting Node 'k3d-banking-registry'
INFO[0027] Starting Node 'k3d-banking-serverlb'
INFO[0027] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0034] Successfully added host record to /etc/hosts in 5/5 nodes and to the CoreDNS ConfigMap
INFO[0035] Cluster 'banking' created successfully!
INFO[0035] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0035] You can now use it like this:
kubectl config use-context k3d-banking
kubectl cluster-info
You can see from the command output that it has named the cluster “banking” and has also created a registry and two agents. We can verify this:
$ k3d node list
NAME ROLE CLUSTER STATUS
k3d-banking-agent-0 agent banking running
k3d-banking-agent-1 agent banking running
k3d-banking-registry registry banking running
k3d-banking-server-0 server banking running
k3d-banking-serverlb loadbalancer banking running
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-banking-agent-0 Ready <none> 7m37s v1.20.7+k3s1
k3d-banking-agent-1 Ready <none> 7m29s v1.20.7+k3s1
k3d-banking-server-0 Ready control-plane,master 7m47s v1.20.7+k3s1
It’s also interesting to take a look at the docker containers running on our host:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c73aadc1b2ca rancher/k3d-proxy:v4.4.4 "/bin/sh -c nginx-pr…" 13 minutes ago Up 13 minutes 80/tcp, 0.0.0.0:43731->6443/tcp k3d-banking-serverlb
d68516ceb877 rancher/k3s:v1.20.7-k3s1 "/bin/k3s agent" 13 minutes ago Up 13 minutes k3d-banking-agent-1
36171a640d50 rancher/k3s:v1.20.7-k3s1 "/bin/k3s agent" 13 minutes ago Up 13 minutes k3d-banking-agent-0
e1d8da0a2598 rancher/k3s:v1.20.7-k3s1 "/bin/k3s server --t…" 13 minutes ago Up 13 minutes k3d-banking-server-0
70026add4640 registry:2 "/entrypoint.sh /etc…" 13 minutes ago Up 13 minutes 0.0.0.0:42371->5000/tcp k3d-banking-registry
This shows how the nodes that k3d has created are running as docker containers
on the host machine. The server node and the two agent nodes all use the k3s
image. The load balancer uses a k3d-proxy
container, and the registry node
is just the standard docker registry container.
Note that when we want to delete this cluster, because we have named it, we need to specify its name:
$ k3d cluster delete banking
INFO[0000] Deleting cluster 'banking'
INFO[0000] Deleted k3d-banking-serverlb
INFO[0000] Deleted k3d-banking-agent-1
INFO[0001] Deleted k3d-banking-agent-0
INFO[0001] Deleted k3d-banking-server-0
INFO[0002] Deleted k3d-banking-registry
INFO[0002] Deleting image volume 'k3d-banking-images'
INFO[0002] Removing cluster details from default kubeconfig...
INFO[0002] Removing standalone kubeconfig file (if there is one)...
INFO[0002] Successfully deleted cluster banking!
Let’s finish this section by making the configuration more manageable. Rather than supplying everything as arguments on the command line, let’s move them to a config file:
apiVersion: k3d.io/v1alpha2
kind: Simple
name: banking
servers: 1
agents: 2
ports:
  - port: 8080:80
    nodeFilters:
      - loadbalancer
registries:
  create: true
The apiVersion
indicates that this is a k3d config file and we also specify
that this is the Simple
config format. The command line options from earlier
can be readily seen in that config file. The only other thing I’ve added is
a ports
specification, so we can access anything we deploy to our cluster.
To break this down, the first thing to note is that nodeFilters
has a value
of loadbalancer
. This means we want to expose a port from the load balancer.
If you recall, the load balancer node is the k3d-proxy
container in our
earlier docker ps
output:
c73aadc1b2ca rancher/k3d-proxy:v4.4.4 "/bin/sh -c nginx-pr…" 13 minutes ago Up 13 minutes 80/tcp, 0.0.0.0:43731->6443/tcp k3d-banking-serverlb
This shows that the load balancer is exposing its service on port 80. So, to
make that available on port 8080 on our host, we specify the port as 8080:80.
Putting that all together, we end up with the ports
definition in the above
config file.
Now we have a config file, we can bring up the cluster just by specifying its location:
$ k3d cluster create --config ./config.yaml
INFO[0000] Using config file ./config.yaml
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-banking' (e1d1aed0854c67507ab65e21ad3aee53dfe8d1e204b83e111bef1dbc726b0f89)
INFO[0000] Created volume 'k3d-banking-images'
INFO[0000] Creating node 'k3d-banking-registry'
INFO[0000] Successfully created registry 'k3d-banking-registry'
INFO[0001] Creating node 'k3d-banking-server-0'
INFO[0001] Creating node 'k3d-banking-agent-0'
INFO[0001] Creating node 'k3d-banking-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-banking-serverlb'
INFO[0001] Starting cluster 'banking'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-banking-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting Node 'k3d-banking-agent-0'
INFO[0019] Starting Node 'k3d-banking-agent-1'
INFO[0026] Starting helpers...
INFO[0026] Starting Node 'k3d-banking-registry'
INFO[0027] Starting Node 'k3d-banking-serverlb'
INFO[0027] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0034] Successfully added host record to /etc/hosts in 5/5 nodes and to the CoreDNS ConfigMap
INFO[0035] Cluster 'banking' created successfully!
INFO[0035] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0035] You can now use it like this:
kubectl config use-context k3d-banking
kubectl cluster-info
Now we have a port exposed, we can try hitting it:
$ curl --dump - http://localhost:8080/
HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
Date: Sun, 12 Sep 2021 21:17:36 GMT
Content-Length: 19
404 page not found
This suggests that we have been successful in setting everything up, since we
can now get a response. It is also reassuring that we get a 404 Not Found
response, since we haven’t deployed anything to our cluster. So let’s
deploy our testdata service from my previous
article to rectify that.
Making our Container Available in the K3d Registry
Assuming that we have the container we built in my previous article, the first thing we need to do is get a copy of it into the registry in our k3d cluster. We can use a standard docker technique to do this: tag the local docker image with the target registry, then push it to that registry.
Now, we know from earlier that our registry is k3d-banking-registry
, but we
need to know the port number to complete our tag. We can find that from a
docker ps
command:
$ docker ps -f name=k3d-banking-registry
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9837f8ffe8a4 registry:2 "/entrypoint.sh /etc…" 25 hours ago Up 15 hours 0.0.0.0:39743->5000/tcp k3d-banking-registry
Looking at the PORTS
column in the output, we can see that the standard
docker registry port (5000) is currently mapped to port 39743. I also need
to append localhost to the registry name so that it resolves from my host
(since I am running the command outside the k3d cluster). This gives me a full
address for the registry of:
k3d-banking-registry.localhost:39743
The local image we created is banking-testdata
, so in the kubernetes
registry I will refer to it as:
k3d-banking-registry.localhost:39743/banking-testdata:local
As stated earlier, I need to apply this tag to my local image and then push to the new registry:
docker tag banking-testdata k3d-banking-registry.localhost:39743/banking-testdata:local
docker push k3d-banking-registry.localhost:39743/banking-testdata:local
Now that I’ve pushed it, I can remove the registry-specific tag from my local image:
docker rmi k3d-banking-registry.localhost:39743/banking-testdata:local
To make this easier to run, I have wrapped this into a script:
#!/bin/bash
# Populate k3d's registry with images from the local registry

registryLabel="k3d-banking-registry"

# Find the host port which docker has mapped to the registry's port 5000
registryPort=$( docker ps -f name=${registryLabel} | grep -o ":[0-9]*->5000" | sed 's/:\([^-]*\)-.*/\1/' )

if [[ "$registryPort" == "" ]] ; then
    echo "ERROR: Unable to find k3d registry to connect to!"
    exit 1
fi

declare -a images=(
    "banking-testdata"
)

# Tag each image for the k3d registry, push it, then remove the local tag
for image in "${images[@]}" ; do
    imageName=$( echo ${image} | cut -d'/' -f2 )
    newTag="${registryLabel}.localhost:${registryPort}/${imageName}:local"
    docker tag ${image} ${newTag}
    docker push ${newTag}
    docker rmi ${newTag}
done
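As an aside, the grep/sed lookup in that script could probably be replaced with docker port, which asks docker for the mapping directly and prints something like 0.0.0.0:39743:
docker port k3d-banking-registry 5000
Either way, the result is the host port we need for the registry address.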
I now have a copy of my image in my k3d registry, so I can move on to deploying it.
Deploying a Container to K3d
Kubernetes uses a yaml
file to specify how to deploy pods, services, etc.
This is a declarative approach since it specifies what we want the
deployment to look like (as opposed to a procedural approach, which says
how we want to do the deployment).
Here is my deployment for the testdata service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdata
  labels:
    app: testdata
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testdata-pod
  template:
    metadata:
      labels:
        app: testdata-pod
    spec:
      containers:
      - name: testdata
        image: k3d-banking-registry:5000/banking-testdata:local
        ports:
        - containerPort: 3000
You can find out more about this structure on the Kubernetes site, but there
are some key things to note here. The metadata is where we specify the name of
this deployment as testdata. In the spec section, we define 2 replicas — so we
will have 2 instances of the container running (actually 2 pods). Within the
template section, you can see that the container image we are using is the one
we pushed in the previous section. Finally, note that we are indicating which
port is exposed from the container (port 3000).
To make the deployment happen, we use the kubectl
command:
$ kubectl apply -f deployment.yaml
deployment.apps/testdata created
We can check that this has worked by looking at our pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
testdata-74fb8ff77-zklc2 1/1 Running 0 10s
testdata-74fb8ff77-n7twk 1/1 Running 0 10s
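As an aside, if we wanted more (or fewer) copies, we wouldn’t have to edit the yaml and re-apply it; kubectl can scale a deployment imperatively (re-applying the file later would put it back to the declared 2 replicas):
kubectl scale deployment testdata --replicas=3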
Although these pods are running, they aren’t exposed in any meaningful way for
us to test. However, we can use the kubectl exec
command to run a wget
inside the container to check it’s working as expected:
$ kubectl exec testdata-74fb8ff77-zklc2 -- wget -q -O- http://localhost:3000/testdata
{
"collections": [
"customers",
"contacts",
"accounts",
"transactions"
],
"_links": [
"http://localhost:3000/customers",
"http://localhost:3000/contacts",
"http://localhost:3000/accounts",
"http://localhost:3000/transactions"
]
}
This runs a web request on the testdata service, from inside the running
testdata container. As you can see, it returns the expected output, so we can
see that the deployment worked correctly. However, this isn’t very
satisfactory for our users, who will need to access this via a standard web
browser. So, the next thing we need to do is to connect a service to our pods.
Again, this is defined in a yaml
file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: testdata
  name: testdata-service
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: testdata-pod
  type: ClusterIP
There are a couple of important things to notice in this file. The first is
that we are mapping the port from our pods to a port the service is
exposing (although both are port 3000). The second thing to note is that we
identify which pods this service connects to via a selector
referring back to
the app
label from our deployment yaml
.
As before, we can use kubectl
to apply this:
$ kubectl apply -f service.yaml
service/testdata-service created
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 26h
testdata-service ClusterIP 10.43.241.33 <none> 3000/TCP 36s
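If we want extra reassurance that the service’s selector really did match our pods, we can also list the service’s endpoints, which should show one address per running pod:
kubectl get endpoints testdata-service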
So we now have our pods running, and a service connected to them. The final
thing we need to do is connect up the service to our k3d ingress, so it can be
accessed from outside the kubernetes cluster. We use another yaml
file to do
this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testdata-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testdata-service
            port:
              number: 3000
This contains a routing rule, saying that a request to /
will connect through
to the testdata-service
we’ve just set up, on port 3000. We use kubectl
to
apply this:
$ kubectl apply -f testdata/ingress.yaml
ingress.networking.k8s.io/testdata-ingress created
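We can also confirm that the ingress resource exists (and see the paths it covers) by listing it:
kubectl get ingress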
This has now connected up port 3000 from our testdata service to the ingress,
meaning we can hit it externally. From outside the cluster it is reachable on
port 8080, since that’s the host port we mapped to the load balancer when we
set up our k3d cluster. So, we
can prove it’s all working using curl
:
$ curl http://localhost:8080/testdata | jq .
{
"collections": [
"customers",
"contacts",
"accounts",
"transactions"
],
"_links": [
"http://localhost:3000/customers",
"http://localhost:3000/contacts",
"http://localhost:3000/accounts",
"http://localhost:3000/transactions"
]
}
Summary
This article has demonstrated how to:
- Spin up a kubernetes cluster using k3d
- Push a container image to the cluster’s registry
- Deploy the image to a set of kubernetes pods
- Connect a service to the pods
- Expose the service via the k3d ingress
If you want to see the code discussed in this article, it’s available at ianfinch/k8s-banking-testbed (commit #b7ff…).