There’s a ton of tutorials on how to get Kubernetes installed onto your Raspberry Pi, so… let’s write another one. 😊
As mentioned in my last post, I’ve found my forgotten Raspberry Pi, and played around with installing and configuring Raspbian Buster on it.
Today, I wanted to check whether it would be possible to install Kubernetes onto such a small machine – there are many articles on the “widest of the world’s webs” that say “Yes, it can be done!“, so I decided to give it a try! And I chose to follow one of them (it seemed like a nice reference).
As you remember, I’m starting with a cleanly installed (and just slightly customized) Raspbian Buster and building it from there.
And I’ll be using kubeadm for installing my cluster.
So, once I had at least two machines (my Raspberry Pi for the “control plane” and an Ubuntu 20.04 LTS Hyper-V virtual machine as the “node” – you can read more about it here), I prepared them like this:
- install Docker (in my case)
- change the default cgroups driver for Docker to systemd
- add cgroups limit support (for my Raspberry Pi 3)
- configure iptables
- disable swap (this one was a bit challenging)
- prepare for Kubernetes installation (source, keys, kubeadm)
- install Kubernetes “control plane”
- add flannel
- add a node to the cluster
- test with some workload
One thing that bothered me (on Buster) was disabling swap in a way that it also stays disabled after a reboot (I know, it’s the details that eventually get you) – after a while, I stumbled upon this forum post, and the solution provided by “powerpete” did the trick! Thank you, @powerpete! 😊
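The gist of that fix (shown in full in the walkthrough below) is to not only turn swap off, but also to remove and disable the dphys-swapfile service, so it doesn’t re-enable swap on the next boot:

# turn swap off and make sure dphys-swapfile doesn't bring it back after a reboot
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo systemctl disable dphys-swapfile.service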
And finally, the details of each step are below (outputs are commented and somewhat redacted/condensed):
# install Docker
sudo apt install -yqq docker.io
# The following additional packages will be installed:
#   cgroupfs-mount git git-man libcurl3-gnutls liberror-perl libintl-perl libintl-xs-perl libltdl7 libmodule-find-perl
#   libmodule-scandeps-perl libnspr4 libnss3 libproc-processtable-perl libsort-naturally-perl libterm-readkey-perl
#   needrestart runc tini
# ...
# The following NEW packages will be installed:
#   cgroupfs-mount docker.io git git-man libcurl3-gnutls liberror-perl libintl-perl libintl-xs-perl libltdl7
#   libmodule-find-perl libmodule-scandeps-perl libnspr4 libnss3 libproc-processtable-perl libsort-naturally-perl
#   libterm-readkey-perl needrestart runc tini
# 0 upgraded, 19 newly installed, 0 to remove and 0 not upgraded.
# ...

# check docker info and change the default cgroups driver for Docker to systemd
sudo docker info
# ...
# Server Version: 18.09.1
# Storage Driver: overlay2
# ...
# Logging Driver: json-file
# Cgroup Driver: cgroupfs
# ...
# Kernel Version: 5.4.79-v7+
# Operating System: Raspbian GNU/Linux 10 (buster)
# OSType: linux
# Architecture: armv7l
# CPUs: 4
# Total Memory: 974.4MiB
# Name: pimaster
# ...
#
# WARNING: No memory limit support
# WARNING: No swap limit support
# WARNING: No kernel memory limit support
# WARNING: No oom kill disable support

# create/edit the /etc/docker/daemon.json file (with the following content, uncommented) and restart docker service
sudo nano /etc/docker/daemon.json
# {
#   "exec-opts": ["native.cgroupdriver=systemd"],
#   "log-driver": "json-file",
#   "log-opts": {
#     "max-size": "100m"
#   },
#   "storage-driver": "overlay2"
# }
sudo systemctl restart docker

# docker info now showing the changed cgroup driver
sudo docker info
# ...
# Cgroup Driver: systemd
# ...
#
# WARNING: No memory limit support
# WARNING: No swap limit support
# WARNING: No kernel memory limit support
# WARNING: No oom kill disable support

# add cgroups limit support and reboot
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/cmdline.txt
# console=tty1 root=PARTUUID=03f5b65e-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1
sudo reboot

# recheck docker info (warnings should be gone)
sudo docker info
# ...
# Storage Driver: overlay2
# ...
# Logging Driver: json-file
# Cgroup Driver: systemd
# ...
# Operating System: Raspbian GNU/Linux 10 (buster)
# ...

# configure iptables
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# configure iptables (add lines to the file, uncommented) and apply the configuration
sudo nano /etc/sysctl.d/k8s.conf
# net.bridge.bridge-nf-call-ip6tables = 1
# net.bridge.bridge-nf-call-iptables = 1
sudo sysctl --system
# * Applying /etc/sysctl.d/98-rpi.conf ...
# kernel.printk = 3 4 1 3
# vm.min_free_kbytes = 16384
# * Applying /etc/sysctl.d/99-sysctl.conf ...
# * Applying /etc/sysctl.d/k8s.conf ...
# net.bridge.bridge-nf-call-ip6tables = 1
# net.bridge.bridge-nf-call-iptables = 1
# * Applying /etc/sysctl.d/protect-links.conf ...
# fs.protected_hardlinks = 1
# fs.protected_symlinks = 1
# * Applying /etc/sysctl.conf ...
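One thing worth double-checking at this point (I didn’t have to change anything on my setup, but the kubeadm prerequisites mention it): the bridge-nf sysctls above only take effect if the br_netfilter kernel module is loaded. A quick sketch of the check:

# check that the br_netfilter module is loaded (needed for the bridge-nf-call sysctls)
lsmod | grep br_netfilter
# if it isn't, load it and make it persistent across reboots
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf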
# disable swap
# check current swap utilization/settings
free -m
#                total        used        free      shared  buff/cache   available
# Mem:             974          82         709           6         182         834
# Swap:             99           0          99
sudo cat /etc/fstab
# proc                  /proc   proc    defaults          0   0
# PARTUUID=03f5b65e-01  /boot   vfat    defaults          0   2
# PARTUUID=03f5b65e-02  /       ext4    defaults,noatime  0   1

# disable swap (note - disabling the service as well!)
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo systemctl disable dphys-swapfile.service
# Synchronizing state of dphys-swapfile.service with SysV service script with /lib/systemd/systemd-sysv-install.
# Executing: /lib/systemd/systemd-sysv-install disable dphys-swapfile
# Removed /etc/systemd/system/multi-user.target.wants/dphys-swapfile.service.

# optionally - reboot & recheck
free -m
#                total        used        free      shared  buff/cache   available
# Mem:             974          79         745          12         149         832
# Swap:              0           0           0

# prepare for Kubernetes installation (source, keys, kubeadm) and "lock" the kubelet, kubeadm and kubectl versions
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# OK
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# deb https://apt.kubernetes.io/ kubernetes-xenial main
sudo apt -qq update
# All packages are up to date.
sudo apt install -yqq kubeadm
# The following additional packages will be installed:
#   conntrack cri-tools ebtables kubectl kubelet kubernetes-cni socat
# Suggested packages:
#   nftables
# The following NEW packages will be installed:
#   conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
# 0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
# ...
sudo apt-mark hold kubelet kubeadm kubectl
# kubelet set on hold.
# kubeadm set on hold.
# kubectl set on hold.
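Before running kubeadm init, it doesn’t hurt to confirm what got installed and that the packages really are on hold – a quick (optional) check could look like this:

# confirm the installed versions
kubeadm version -o short
kubectl version --client --short
kubelet --version
# confirm the packages are held back from upgrades
apt-mark showhold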
# install Kubernetes control plane
# generate token for the installation
TOKEN=$(sudo kubeadm token generate)
echo $TOKEN
# rqyytr.f3pvvz2yepq51b7h

sudo kubeadm init --token=${TOKEN} --kubernetes-version=v1.19.4 --pod-network-cidr=10.244.0.0/16
# W1208 16:59:02.624049     781 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
# [init] Using Kubernetes version: v1.19.4
# [preflight] Running pre-flight checks
#   [WARNING SystemVerification]: missing optional cgroups: hugetlb
# [preflight] Pulling images required for setting up a Kubernetes cluster
# [preflight] This might take a minute or two, depending on the speed of your internet connection
# [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
# [certs] Using certificateDir folder "/etc/kubernetes/pki"
# [certs] Generating "ca" certificate and key
# [certs] Generating "apiserver" certificate and key
# [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local pimaster] and IPs [10.96.0.1 192.168.12.101]
# [certs] Generating "apiserver-kubelet-client" certificate and key
# [certs] Generating "front-proxy-ca" certificate and key
# [certs] Generating "front-proxy-client" certificate and key
# [certs] Generating "etcd/ca" certificate and key
# [certs] Generating "etcd/server" certificate and key
# [certs] etcd/server serving cert is signed for DNS names [localhost pimaster] and IPs [192.168.12.101 127.0.0.1 ::1]
# [certs] Generating "etcd/peer" certificate and key
# [certs] etcd/peer serving cert is signed for DNS names [localhost pimaster] and IPs [192.168.12.101 127.0.0.1 ::1]
# [certs] Generating "etcd/healthcheck-client" certificate and key
# [certs] Generating "apiserver-etcd-client" certificate and key
# [certs] Generating "sa" key and public key
# [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
# [kubeconfig] Writing "admin.conf" kubeconfig file
# [kubeconfig] Writing "kubelet.conf" kubeconfig file
# [kubeconfig] Writing "controller-manager.conf" kubeconfig file
# [kubeconfig] Writing "scheduler.conf" kubeconfig file
# [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
# [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
# [kubelet-start] Starting the kubelet
# [control-plane] Using manifest folder "/etc/kubernetes/manifests"
# [control-plane] Creating static Pod manifest for "kube-apiserver"
# [control-plane] Creating static Pod manifest for "kube-controller-manager"
# [control-plane] Creating static Pod manifest for "kube-scheduler"
# [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
# [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
# [kubelet-check] Initial timeout of 40s passed.
# [kubelet-check] It seems like the kubelet isn't running or healthy.
# [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
# [kubelet-check] It seems like the kubelet isn't running or healthy.
# [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
# [kubelet-check] It seems like the kubelet isn't running or healthy.
# [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
# [apiclient] All control plane components are healthy after 212.227971 seconds
# [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
# [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
# [upload-certs] Skipping phase. Please see --upload-certs
# [mark-control-plane] Marking the node pimaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
# [mark-control-plane] Marking the node pimaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
# [bootstrap-token] Using token: rqyytr.f3pvvz2yepq51b7h
# [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
# [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
# [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
# [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
# [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
# [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
# [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
# [addons] Applied essential addon: CoreDNS
# [addons] Applied essential addon: kube-proxy
#
# Your Kubernetes control-plane has initialized successfully!
#
# To start using your cluster, you need to run the following as a regular user:
#   mkdir -p $HOME/.kube
#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
#
# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: # https://kubernetes.io/docs/concepts/cluster-administration/addons/ # # Then you can join any number of worker nodes by running the following on each as root: # # kubeadm join 192.168.12.101:6443 --token rqyytr.f3pvvz2yepq51b7h \ # --discovery-token-ca-cert-hash sha256:5b04368aaee148fa3eb8c6aed3c3bc8041248590e2055a81617d16d8f796bb77 # start using the cluster mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config # check node status (something's missing - flannel) kubectl get nodes # NAME STATUS ROLES AGE VERSION # pimaster NotReady master 14m v1.19.4 # install flannel network plugin kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # podsecuritypolicy.policy/psp.flannel.unprivileged created # clusterrole.rbac.authorization.k8s.io/flannel created # clusterrolebinding.rbac.authorization.k8s.io/flannel created # serviceaccount/flannel created # configmap/kube-flannel-cfg created # daemonset.apps/kube-flannel-ds created # and now we're ready (check under STATUS) kubectl get nodes # NAME STATUS ROLES AGE VERSION # pimaster Ready master 17m v1.19.4 # add node to the cluster (pinode, 192.168.12.102) sudo kubeadm join 192.168.12.101:6443 --token rqyytr.f3pvvz2yepq51b7h --discovery-token-ca-cert-hash sha256:5b04368aaee148fa3eb8c6aed3c3bc8041248590e2055a81617d16d8f796bb77 # [preflight] Running pre-flight checks # [preflight] Reading configuration from the cluster... # [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' # [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" # [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" # [kubelet-start] Starting the kubelet # [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... # # This node has joined the cluster: # * Certificate signing request was sent to apiserver and a response was received. # * The Kubelet was informed of the new secure connection details. # # Run 'kubectl get nodes' on the control-plane to see this node join the cluster. # check nodes in the cluster (check the details... 
nice) kubectl get nodes -o wide # NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME # pimaster Ready master 18m v1.19.4 192.168.12.101 Raspbian GNU/Linux 10 (buster) 5.4.79-v7+ docker://18.9.1 # pinode Ready 6m15s v1.19.4 192.168.12.102 Ubuntu 20.04.1 LTS 5.4.0-56-generic docker://19.3.8 # test with some workload kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml # deployment.apps/redis-master created kubectl get pods # NAME READY STATUS RESTARTS AGE # redis-master-f46ff57fd-9m9g9 1/1 Running 0 37s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml # service/redis-master created kubectl get service # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # kubernetes ClusterIP 10.96.0.1 443/TCP 22m # redis-master ClusterIP 10.100.245.207 6379/TCP 4s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml # deployment.apps/redis-slave created kubectl get pods # NAME READY STATUS RESTARTS AGE # redis-master-f46ff57fd-9m9g9 1/1 Running 0 2m30s # redis-slave-bbc7f655d-dz895 1/1 Running 0 12s # redis-slave-bbc7f655d-slnhv 1/1 Running 0 12s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml # service/redis-slave created kubectl get services # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # kubernetes ClusterIP 10.96.0.1 443/TCP 23m # redis-master ClusterIP 10.100.245.207 6379/TCP 91s # redis-slave ClusterIP 10.101.165.188 6379/TCP 13s kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml # deployment.apps/frontend created kubectl get pods -l app=guestbook -l tier=frontend # NAME READY STATUS RESTARTS AGE # frontend-6c6d6dfd4d-945hd 1/1 Running 0 7s # frontend-6c6d6dfd4d-mm7vm 1/1 Running 0 7s # frontend-6c6d6dfd4d-wllpx 1/1 Running 0 7s kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml # service/frontend created kubectl get services # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # frontend NodePort 10.101.140.217 80:31597/TCP 4s # kubernetes ClusterIP 10.96.0.1 443/TCP 24m # redis-master ClusterIP 10.100.245.207 6379/TCP 2m45s # redis-slave ClusterIP 10.101.165.188 6379/TCP 87s |
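With the frontend service exposed as a NodePort (31597 in my run – yours will almost certainly differ), the guestbook should be reachable on any node’s IP, something like:

# the NodePort value (31597 here) is whatever "kubectl get services" showed above
curl -s http://192.168.12.101:31597 | head -n 10
# or simply open http://192.168.12.101:31597 in a browser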
# cleanup
kubectl delete deployment -l app=redis
# deployment.apps "redis-master" deleted
# deployment.apps "redis-slave" deleted
kubectl delete service -l app=redis
# service "redis-master" deleted
# service "redis-slave" deleted
kubectl delete deployment -l app=guestbook
# deployment.apps "frontend" deleted
kubectl delete service -l app=guestbook
# service "frontend" deleted
kubectl get pods
# No resources found in default namespace.
Cheers!
P.S. I’ve read about some people having issues with flannel and switching to other network options (I didn’t have this issue). Also, if you have issues with iptables (v1.8+), you may need to switch to the legacy version (I didn’t have this issue either).
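In case you do hit the iptables v1.8+/nftables issue, the usual fix mentioned in the kubeadm install docs is switching to the legacy backend – a sketch of what that looks like on Debian/Raspbian (again, only if you actually have the problem):

# switch iptables/ip6tables to the legacy backend (only if the nftables backend causes trouble)
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy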
When I get to this stage, it fails:
# add node to the cluster (pinode, 192.168.12.102)
sudo kubeadm join 192.168.12.101:6443 --token rqyytr.f3pvvz2yepq51b7h --discovery-token-ca-cert-hash sha256:5b04368aaee148fa3eb8c6aed3c3bc8041248590e2055a81617d16d8f796bb77
sudo kubeadm join 10.55.0.248:6443 --token ******.******************* --discovery-token-ca-cert-hash sha256:******************************************************************
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with
--ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
pi@k3s-master:~ $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-master Ready control-plane,master 3m29s v1.23.5
Any suggestions? Thanks.
Are you perhaps trying to add the same node to the cluster? It seems so – it looks like you’re on the “master” node and running the “kubeadm join” command there. You should be on the second node (the “pinode” in my example) and join that one to the cluster running on the master. Or did I miss something? 🙂
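If it helps, you can always generate a fresh join command on the control-plane node (bootstrap tokens expire after 24 hours by default) and then run the printed command on the second node – something like:

# run on the control-plane node - prints a ready-to-use "kubeadm join ..." command for the worker
sudo kubeadm token create --print-join-command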
That is what I was doing, I realised. Can I not add the master as a node to the cluster? It seems a waste for it to only do the management and nothing else.
No, it’s already “in the cluster” and running the “control plane” pods. If you want a single-node cluster, you can try to “untaint” the master node, allowing it to run “normal” workloads as well. The command should be something like “kubectl taint nodes --all node-role.kubernetes.io/master-“.
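A rough sketch (note that, depending on your Kubernetes version, the taint key may be node-role.kubernetes.io/control-plane instead of .../master):

# allow the control-plane/master node to run regular workloads (single-node cluster scenario)
kubectl taint nodes --all node-role.kubernetes.io/master-
# verify - the Taints line should no longer show the NoSchedule master taint
kubectl describe node k3s-master | grep -i taints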
Tom