Yet another “Kubernetes with Raspberry Pi” post

There’s a ton of tutorials on how to get Kubernetes installed onto your Raspberry Pi, so… let’s write another one. 😊

As mentioned in my last post, I found my forgotten Raspberry Pi and played around with installing and configuring Raspbian Buster on it.

Today, I wanted to check whether it would be possible to install Kubernetes onto such a small machine. There are many articles on the “widest of the world’s webs” saying “Yes, it can be done!“, so I decided to give it a try! I chose to follow one of them (it seemed like a nice reference).

As you remember, I’m starting with a cleanly installed (and just slightly customized) Raspbian Buster and building from there.

And I’ll be using kubeadm for installing my cluster.

So, once I had at least two machines (my Raspberry Pi for the “control plane” and an Ubuntu 20.04 LTS Hyper-V virtual machine as the “node” – you can read more about it here), I prepared them like this (rough sketches of the individual steps follow below):

  • install Docker (in my case)
  • change the default cgroups driver for Docker to systemd
  • add cgroups limit support (for my Raspberry Pi 3)
  • configure iptables
  • disable swap (this one was a bit challenging)
  • prepare for Kubernetes installation (apt sources, keys, kubeadm)
  • install Kubernetes “control plane”
  • add flannel
  • add a node to the cluster
  • test with some workload
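
To give a better idea of what the first few preparation steps (Docker, cgroups, iptables) involve, here is a rough sketch of the usual way to do them on Raspbian Buster – not necessarily the exact commands from my setup, so adjust as needed:

    # install Docker (the convenience script is one common way)
    curl -fsSL https://get.docker.com | sudo sh
    sudo usermod -aG docker $USER

    # change the default cgroups driver for Docker to systemd
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

    # add cgroups limit support on the Raspberry Pi
    # (appends to the single line in /boot/cmdline.txt - reboot afterwards)
    sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt

    # configure iptables to see bridged traffic
    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
    echo 'net.bridge.bridge-nf-call-ip6tables = 1' | sudo tee -a /etc/sysctl.d/k8s.conf
    sudo modprobe br_netfilter
    sudo sysctl --system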

One thing that bothered me (on Buster) was disabling swap in a way that it also stays disabled after a reboot (I know, it’s the details that eventually get you). After a while, I stumbled upon this forum post, and the solution provided by powerpete did the trick! Thank you, @powerpete! 😊
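
A commonly used way to keep swap off on Raspbian (where it is managed by dphys-swapfile) looks roughly like this (the forum post may suggest a slightly different variant):

    # turn swap off now and keep it off after reboots
    sudo dphys-swapfile swapoff
    sudo dphys-swapfile uninstall
    sudo systemctl disable dphys-swapfile.service
    # verify - the "Swap:" line should show 0
    free -m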

And finally, details about each step are here (outputs are commented and somewhat redacted/condensed):
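
Condensed, the Kubernetes part boils down to something like this (a sketch of the standard kubeadm procedure rather than a verbatim copy of my commands; the repository and flannel URLs are the ones commonly used at the time, and the join values are placeholders):

    # prepare for Kubernetes installation (apt sources, keys, kubeadm) - on all machines
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl

    # install Kubernetes "control plane" (on the Raspberry Pi; pod CIDR matches flannel's default)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # make kubectl work for the regular user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # add flannel
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # add a node to the cluster (run on the node, with the values printed by "kubeadm init")
    sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>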

Seems to be working (😊):
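
A quick way to check it yourself (the nginx deployment is just an illustrative test workload, not necessarily the one I used):

    # nodes should show up as Ready once flannel is running
    kubectl get nodes -o wide
    kubectl get pods --all-namespaces

    # tiny test workload
    kubectl create deployment nginx-test --image=nginx
    kubectl expose deployment nginx-test --port=80 --type=NodePort
    kubectl get pods,svc -o wide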

Cheers!

P.S. I’ve read about some people having issues with flannel and using other network options instead (I didn’t have this issue). Also, if you have issues with iptables (v1.8+), you may need to switch to the legacy version (I didn’t have this one either).
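
For completeness, switching to the legacy iptables backend on Debian-based systems (only needed if you actually hit that issue) is done like this:

    # switch iptables/ip6tables to the legacy backend (iptables v1.8+)
    sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
    sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy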

4 Comments

  1. When I get to this stage, it fails:
    # add node to the cluster (pinode, 192.168.12.102)
    sudo kubeadm join 192.168.12.101:6443 --token rqyytr.f3pvvz2yepq51b7h --discovery-token-ca-cert-hash sha256:5b04368aaee148fa3eb8c6aed3c3bc8041248590e2055a81617d16d8f796bb77
    sudo kubeadm join 10.55.0.248:6443 --token ******.******************* --discovery-token-ca-cert-hash sha256:******************************************************************
    [preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
    To see the stack trace of this error execute with --v=5 or higher
    pi@k3s-master:~ $ kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k3s-master Ready control-plane,master 3m29s v1.23.5
    Any suggestions? Thanks.

    • Are you perhaps adding the same node to the cluster? It seems so: you’re on the “pimaster” node and running the “kubeadm join” command. You should be on the second node (“pinode”) and join that one to the cluster running on “pimaster”. Or did I miss something? 🙂

      • That is what I was doing, I realised. Can I not add the master as a node to the cluster? It seems to be a waste for it to only do the management and nothing else.

        • No, it’s already “in the cluster” and running the “control plane” pods. If you want a single-node cluster, you can eventually try to “untaint” the master node, thus allowing it to run “normal” workloads as well. The command should be something like “kubectl taint nodes --all node-role.kubernetes.io/master-“.

          Tom

