Cluster Triple - k3s setup

2019-09-09

In my previous post, I did the hardware setup. Next up, I'm going to get Kubernetes running!

Goals

  • Run the cbridge controller Pi as the Kubernetes master node
  • Join the three CM devices in the Cluster Triple as nodes in the cluster
  • Verify Kubernetes is working by running a simple command in a pod on the cluster
  • Access the cluster from my laptop

Kubernetes Distribution

This year, Rancher Labs released a new project called k3s: a lightweight but still certified Kubernetes distribution, tailored for IoT, ARM, CI, and other uses where stock Kubernetes isn't as suitable.

I haven't played with k3s yet, so this is a perfect opportunity to do so!

Control Node Install

First, we'll run the standard k3s install script on the controller Pi. This sets up the controller Pi as the Kubernetes master; the CM devices will join as worker nodes later:

pi@cbridge$ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.8.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.8.1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.8.1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Once the install completes, you can inspect the cluster's status:

pi@cbridge$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
cbridge   Ready    master   91s   v1.14.6-k3s.1
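
You can also confirm the k3s service itself is healthy via systemd (the installer created and enabled the k3s unit, as seen in the log above); it should report active (running):

pi@cbridge$ sudo systemctl status k3s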

Joining CM Nodes

To join your CM nodes to the cluster, you'll need two pieces of information:

  1. The IP address of your cbridge controller node. Alternatively, you could try using the mDNS local name, but I have not tested this.
  2. The secret token that can be used to join the cluster. This can be found in /var/lib/rancher/k3s/server/node-token on the controller.
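
For example, both can be pulled straight from the controller (hostname -I lists the Pi's addresses, so use the one on your LAN; your token will differ from the placeholder shown here):

pi@cbridge$ hostname -I
192.168.1.10
pi@cbridge$ sudo cat /var/lib/rancher/k3s/server/node-token
SOME_SECRET_TOKEN_VALUE:node:123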

With that information on hand, you can run this for your first node:

pi@p5$ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN="SOME_SECRET_TOKEN_VALUE:node:123" sh -
[INFO]  Finding latest release
[INFO]  Using v0.8.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.8.1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.8.1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
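
On the node itself, you can confirm the k3s-agent service is up (the installer created and enabled the k3s-agent unit, per the log above):

pi@p5$ sudo systemctl status k3s-agent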

Checking the status from the control node:

pi@cbridge$ sudo kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
cbridge   Ready      master   24h   v1.14.6-k3s.1
p5        NotReady   worker   1s    v1.14.6-k3s.1

You can see the worker node has joined the cluster but isn't ready yet. After waiting a bit, the node becomes fully ready:

pi@cbridge$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
cbridge   Ready    master   24h     v1.14.6-k3s.1
p5        Ready    worker   2m19s   v1.14.6-k3s.1
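
Rather than re-running the command, you can also have kubectl watch for changes; the -w flag keeps it running and prints a new line whenever a node's status changes:

pi@cbridge$ sudo kubectl get nodes -w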

I then repeated the same steps on the other CM nodes to finally get the basic Kubernetes cluster up and running:

pi@cbridge$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
cbridge   Ready    master   24h     v1.14.6-k3s.1
p5        Ready    worker   5m27s   v1.14.6-k3s.1
p6        Ready    worker   30s     v1.14.6-k3s.1
p7        Ready    worker   5s      v1.14.6-k3s.1

Running a Test Pod

To verify the cluster can actually run workloads, start an interactive Alpine pod:

pi@cbridge$ sudo kubectl run -i --tty alpine --image=alpine -- /bin/ash
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ #
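
From here, you can run any command you like inside the pod. For example, checking the machine architecture should show an ARM value, confirming the pod landed on one of the Pi nodes (armv7l is what I'd expect on these boards):

/ # uname -m
armv7l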

Before exiting the prompt, I verified where the pod was running:

pi@cbridge$ sudo kubectl describe pod alpine | grep "^Node:"
Node:               p5/192.168.1.208
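
Since this version of kubectl run creates a Deployment behind the scenes (hence the deprecation warning above), remember that cleaning up means deleting the deployment, not just the pod:

pi@cbridge$ sudo kubectl delete deployment alpine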

Accessing the Cluster

If you want, you can continue to interact with the cluster by first SSH-ing into the cbridge control node. However, you'll often want to interact with the cluster from your local machine. By default, the installed configuration file is only accessible by root on the controller node, so you'll need to cat the file's contents with sudo and paste them into a local file:

pi@cbridge$ sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: NOTFORYOU==
    server: https://localhost:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: OMITTED
    username: admin

Open an editor on your local machine, create ~/.kube/pi-cluster-config, and paste in the content. Once pasted, edit the server line to change localhost to the IP of your cluster, e.g. server: https://192.168.1.10:6443. By default, k3s names the cluster, user, and context default, which isn't very helpful locally, so you may also want to replace all instances of default with pi-cluster in the file.
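
If you'd rather script those edits, a one-liner along these lines should do the whole fetch-and-rewrite in one go. This is just a sketch: it assumes SSH access to cbridge, that 192.168.1.10 is your controller's IP, and that the string default only appears in the name fields, so double-check the resulting file:

ssh pi@cbridge sudo cat /etc/rancher/k3s/k3s.yaml \
  | sed -e 's/localhost/192.168.1.10/' -e 's/default/pi-cluster/g' \
  > ~/.kube/pi-cluster-config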

To make sure you pick up any existing Kubernetes configuration, as well as the new one, you'll want to update your $KUBECONFIG variable. If you already have one set, you can do:

export KUBECONFIG=$KUBECONFIG:$HOME/.kube/pi-cluster-config

If you don't have any existing value for $KUBECONFIG, you can do:

export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/pi-cluster-config

Once $KUBECONFIG is set, you can view the merged config:

kubectl config view
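
Assuming you renamed everything to pi-cluster, you can then switch kubectl over to the new context:

kubectl config use-context pi-cluster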

Finally, you can confirm you can access the cluster by describing the master cbridge node:

$ kubectl describe node cbridge
Name:               cbridge
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=cbridge
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"0a:f8:e2:7e:94:5f"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.10
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 07 Sep 2019 23:05:21 -0400
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 09 Sep 2019 10:03:52 -0400   Sat, 07 Sep 2019 23:05:20 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 09 Sep 2019 10:03:52 -0400   Sat, 07 Sep 2019 23:05:20 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 09 Sep 2019 10:03:52 -0400   Sat, 07 Sep 2019 23:05:20 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 09 Sep 2019 10:03:52 -0400   Sat, 07 Sep 2019 23:05:32 -0400   KubeletReady                 kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.1.10
  Hostname:    cbridge
Capacity:
 cpu:                4
 ephemeral-storage:  15025172Ki
 memory:             948308Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  14616487311
 memory:             948308Ki
 pods:               110
System Info:
 Machine ID:                 3390eb2107964e7d9da093e8c1d17922
 System UUID:                3390eb2107964e7d9da093e8c1d17922
 Boot ID:                    2a3b0c78-a1cd-4a6f-9381-a2faa199e3e8
 Kernel Version:             4.19.66-v7+
 OS Image:                   Raspbian GNU/Linux 10 (buster)
 Operating System:           linux
 Architecture:               arm
 Container Runtime Version:  containerd://1.2.7-k3s1
 Kubelet Version:            v1.14.6-k3s.1
 Kube-Proxy Version:         v1.14.6-k3s.1
PodCIDR:                     10.42.0.0/24
Non-terminated Pods:         (3 in total)
  Namespace                  Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-b7464766c-pbbr6     100m (2%)     0 (0%)      70Mi (7%)        170Mi (18%)    34h
  kube-system                svclb-traefik-plckk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34h
  kube-system                traefik-5c79b789c5-m2f77    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (2%)  0 (0%)
  memory             70Mi (7%)  170Mi (18%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:              <none>

If you're happy with it, make sure to add the export for $KUBECONFIG to your ~/.bashrc so it's set automatically in future sessions.
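
For example, if you used the second form above (no pre-existing $KUBECONFIG); the single quotes keep $HOME from expanding until the line runs in future shells:

echo 'export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/pi-cluster-config' >> ~/.bashrc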

Next Steps

Next, I'll work on setting up things like Kubernetes Dashboard, Prometheus, etc!