Wanting to learn Kubernetes, I needed to set up a cluster while striking a balance between learning the fundamentals and not getting frustrated by the complexity before seeing actual results.

I also want to use the GitLab Agent, as I truly enjoy using GitLab and its CI/CD setup for everyday work. While I'm not sure all these tools will make life better for my intended use case, I am sure I'll learn some new and interesting things along the way.

After some reading, I've decided to give this a go using kubeadm, with Calico as my CNI, and with hope to guide me through this process while maintaining sanity and not causing too much damage along the way.

My first goal is to have a single-node cluster up and running, with a simple web server serving a static page through Traefik.

I will do this on a freshly rented VPS running Ubuntu 21.10.

Let’s get to it.

1. Preparing your host

First, let's give our machine a clear name, and add that name to the hosts file.

sudo hostnamectl set-hostname k8s-master
echo "$(curl ifconfig.me) k8s-master" | sudo tee -a /etc/hosts

Then we need to enable a few things to ensure network traffic gets to the right places.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
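
These settings are only picked up at boot, so to load the modules and apply the sysctls right away (rather than waiting for the reboot we'll do later):

sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system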

Disable swap

We also need to disable swap. Check whether it's enabled using swapon --show. If it is, turn it off and remove the entry from /etc/fstab so it stays off after a reboot.
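
Something like the following should do it, assuming a typical fstab where the swap entry contains the word swap:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab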

Update everything

Also get your system fully up to date, using:

sudo apt update
sudo apt full-upgrade -y

2. Next, we need a container runtime

containerd is my choice for now, based on the flawed opinion that “Docker uses it too”.

You can install it using the steps outlined here, but be sure not to install docker-ce or docker-ce-cli: the current Kubernetes version (1.23) would then use Docker Engine, for which support will be removed in 1.24.

So in a nutshell:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io

Finally, containerd needs its default configuration (instead of the one the package ships with), with one small change: enabling the systemd cgroup driver.

containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
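
Then restart containerd so it picks up the new configuration (the reboot further down would also take care of this):

sudo systemctl restart containerd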

3. Now we will install kubeadm

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

These packages are set on hold because version management is quite important when maintaining a cluster. Upgrading is a manual process.
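
A quick check that everything landed, and which version we're now pinned to:

kubeadm version -o short
kubelet --version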

Now that we’ve fully prepared, time for a clean-up and a reboot.

sudo apt autoremove --purge -y
sudo reboot

4. Time to set up a cluster

After having set up a cluster, I learned that kubeadm does not assign a pod network CIDR by default, and that the default setup of Calico expects one (192.168.0.0/16). The command below takes this into account, just as in Calico's Kubernetes setup quickstart.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Then set up access to the cluster with kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

and provide completions for your shell.

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

By default the master node does not allow scheduling pods on it, because it is tainted. If this is going to be your only node, be sure to remove the taint so pods will get scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/master-
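
At this point the node should show up, although it will report NotReady until the network plugin from step 6 is running:

kubectl get nodes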

5. Helm

The next pieces we'll install using Helm, so let's install that on our cluster machine.

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
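
And a quick check that it works:

helm version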

6. Now we need a network manager

Calico will be our network manager of choice. I'm not sure it's the best option for any reason other than popularity.

Let’s get it running:

helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --namespace tigera-operator --create-namespace --wait
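
The --wait flag already makes Helm block until the chart's resources are ready, but it doesn't hurt to watch the Calico pods come up (and the node switch to Ready):

watch kubectl get pods -n calico-system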

When that is done, let's make calicoctl available to control Calico from a pod.

kubectl apply -f https://projectcalico.docs.tigera.io/manifests/calicoctl.yaml

Let’s create an alias on our master node for easy access:

echo 'alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"' >> ~/.bashrc
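
Reload your shell configuration and give it a spin:

source ~/.bashrc
calicoctl get nodes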

7. GitLab Agent

Time for the final piece. This one is a point-and-click exercise in the GitLab UI.

  1. Create a group for your Kubernetes things.
  2. Create a project in that group based on the GitLab Cluster Management project template.
  3. Go to that project’s Infrastructure -> Kubernetes clusters.
  4. Click on Connect a cluster (agent).
  5. Type a nice and unique name for your agent in the search box.
  6. Click on Create agent: <your_name>.
  7. Run the commands shown (see the sketch below) on the host running the cluster.
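
For reference, the snippet GitLab generates is essentially a Helm install of the agent chart, roughly along these lines. The values below are placeholders, so use the exact command from the UI, as it embeds your agent token and KAS address:

helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install <your_name> gitlab/gitlab-agent \
    --namespace gitlab-agent \
    --create-namespace \
    --set config.token=<your_agent_token> \
    --set config.kasAddress=wss://kas.gitlab.com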

All done! Now you’re ready to deploy workloads on your new cluster, or add nodes.

8. Adding a node

Adding a node is mostly the same process up to and including step 3, so do that first on the new machine.

Then make sure the new node can reach the API server on the master (6443/tcp), that ports 10250/tcp and 30000-32767/tcp are open on the new node, and that you can access the node through SSH.

Next we need a token, which expires after 24 hours. Your initial one might no longer be there, so create a new one and place it in a variable. We do this on the master node.

TOKEN=$(sudo kubeadm token create)

Then place the trusted cluster CA key hash in a variable:

CA_KEY_HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')

Finally we need the advertised address of our API endpoint.

API_ENDPOINT=$(kubectl config view -o jsonpath='{range .clusters[0]}{.cluster.server}{"\n"}{end}' | cut -d/ -f3)
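
Before joining, a quick sanity check that all three variables are populated:

echo "endpoint=${API_ENDPOINT} token=${TOKEN} hash=${CA_KEY_HASH}"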

Let’s join our node using SSH.

ssh user@node sudo kubeadm join ${API_ENDPOINT} \
--token ${TOKEN} \
--discovery-token-ca-cert-hash sha256:${CA_KEY_HASH}

Lastly we need to add the new node's IP and hostname to /etc/hosts on the master.

echo "10.0.0.153 k8s-mars" | sudo tee -a /etc/hosts

Then watch your node come up, waiting for its status to change to Ready:

watch kubectl get nodes

GitLab Runner

Upgrading the cluster

As we cannot skip minor versions during an upgrade path, we’ll first place the current and next minor version in two variables. Later on we need the latest patch version, so we store that as well.

Upgrade node

First upgrade the control plane node(s). When that's done, we need to repeat these commands on the other nodes.

CURRENT_VERSION=$(dpkg -s kubelet | grep Version | cut -d. -f2)
NEXT_VERSION=$((CURRENT_VERSION+1))
PATCH_VERSION=$(apt-cache madison kubelet | grep 1.${NEXT_VERSION} | cut -d. -f3 | cut -d- -f1 | head -n1)
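
A quick echo shows what we are about to do before committing to anything:

echo "Upgrading from 1.${CURRENT_VERSION} to 1.${NEXT_VERSION}.${PATCH_VERSION}"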

Then we upgrade kubeadm as that tool does the actual upgrade.

sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=1.${NEXT_VERSION}.*
sudo apt-mark hold kubeadm
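
Before applying, kubeadm can check the cluster and show what the upgrade would entail:

sudo kubeadm upgrade plan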

Now we need to do the actual upgrade.

sudo kubeadm upgrade apply v1.${NEXT_VERSION}.${PATCH_VERSION}

The same goes for the other nodes, but replace apply with node in that final command. Before upgrading the kubelet on a node, drain it first (the node here is called ubuntu):

kubectl drain ubuntu --ignore-daemonsets

Upgrade the rest

sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install kubelet=1.${NEXT_VERSION}.* kubectl=1.${NEXT_VERSION}.*
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
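
If you drained the node earlier, don't forget to allow pods to be scheduled on it again:

kubectl uncordon ubuntu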
