RKE2

Purpose

The purpose of this document is to describe the steps to deploy the RKE2 Kubernetes distribution in high availability with Kube-VIP.

Kube-VIP Requirements

A VIP is a virtual IP address that remains available and moves seamlessly between the control-plane nodes, with one control-plane node active at a time in Kube-VIP. Kube-VIP works much like keepalived, but offers additional flexibility that can be configured depending on the environment. For example, Kube-VIP can work using:

  • ARP – When using ARP (Layer 2), Kube-VIP uses leader election to decide which node announces the VIP.
  • BGP – BGP is a mechanism by which networks that rely on routing (Layer 3) can ensure that new addresses are advertised to the routing infrastructure.
  • Routing Tables – Routing-table mode allows additional routing technologies such as ECMP. In this mode, kube-vip manages the addition and deletion of addresses in the nodes' routing tables so that they receive the correct traffic.

For the RKE2 cluster setup, we will be using ARP mode, which requires:

  • The VIP must be transferable between any of the control-plane nodes, i.e. it must be in the same subnet as the control-plane nodes. ARP typically broadcasts updates to the entire network that refresh the IP-to-hardware (MAC) mapping; this ensures traffic is sent to the correct physical or virtual NIC.
  • ARP traffic must be open in the network. A quick check for both requirements is sketched below.
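
As a sanity check (a minimal sketch; the interface name ens18 and the VIP 192.168.103.130 are the values used later in this document), arping shows which MAC address currently answers for the VIP, confirming both that ARP traffic flows and that the address is reachable on the local segment:

# Send three ARP requests for the VIP from another host on the same subnet;
# the replying MAC identifies the control-plane node currently holding it.
arping -c 3 -I ens18 192.168.103.130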

In our case we will use the control-plane (server) nodes to deploy the different services. For a lightly loaded cluster, the control-plane nodes can also carry part of the workload alongside the worker nodes; a config option for dedicating them to the control plane instead is sketched below.
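
If you prefer dedicated control-plane nodes instead, RKE2 supports tainting the servers through config.yaml (a sketch; add this to the config created in Step 1):

# /etc/rancher/rke2/config.yaml (additional entry)
# Keeps regular workloads off the server nodes; critical addons still schedule.
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"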

Installation Steps (do it as root)

Step 1: Prepare First Control Plane

mkdir -p /etc/rancher/rke2/
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat <<EOF | tee /etc/rancher/rke2/config.yaml
tls-san:
  - rke201-slyshu.bokors.net
  - 192.168.103.130
  - rkes01-slyshu.bokors.net
  - 192.168.103.131
  - rkes02-slyshu.bokors.net
  - 192.168.103.132
  - rkes03-slyshu.bokors.net
  - 192.168.103.133
write-kubeconfig-mode: "0600"
etcd-expose-metrics: true
cni:
  - canal

EOF
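
Once the server is up (after Step 2), you can verify that the API-server certificate actually contains these SANs (a sketch; the -ext flag requires OpenSSL 1.1.1 or newer, and 192.168.103.131 should be replaced with the address of the node you are checking):

# Print the Subject Alternative Names presented on the API-server port (6443).
openssl s_client -connect 192.168.103.131:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName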

Step 2: Ingress-Nginx config for RKE2

  1. By default, the RKE2-bundled ingress controller doesn't allow additional snippet information in Ingress manifests. Create this config before starting the deployment of RKE2 (an example Ingress that uses a snippet annotation follows the config below).

Create /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml with the following content:

---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      metrics:
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
      config:
        use-forwarded-headers: "true"
      allowSnippetAnnotations: "true"
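
To illustrate what the allowSnippetAnnotations setting unlocks, here is a hypothetical Ingress carrying an nginx configuration snippet (the name, host, and backend service are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app                  # placeholder
  annotations:
    # Only accepted because allowSnippetAnnotations is "true" above.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
spec:
  ingressClassName: nginx
  rules:
    - host: app.bokors.net           # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80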
  2. Begin the RKE2 deployment.
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
  3. Start the RKE2 service. Starting the service will take approx. 10-15 minutes depending on the network connection.
systemctl start rke2-server
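
To watch the bootstrap progress while the service starts, you can follow the unit's logs in a second terminal:

# Stream the rke2-server logs; Ctrl-C to stop.
journalctl -u rke2-server -f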

Step 3: Enable the RKE2 Service

  1. Enable the RKE2 service.
systemctl enable rke2-server
  2. By default, RKE2 deploys all its binaries in the /var/lib/rancher/rke2/bin path. Add this path to the system's default PATH for the kubectl utility to work appropriately.
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
  3. Also, append these lines to the current user's .bashrc file (single quotes keep $PATH from being expanded at echo time, so the entry stays portable):
echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> $HOME/.bashrc
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> $HOME/.bashrc
  4. Get the token for joining other control-plane nodes:
cat /var/lib/rancher/rke2/server/node-token

Step 4: Deploy Kube-VIP

Do this as root; on Ubuntu, sudo su.

  1. Decide the IP and the interface on all nodes for Kube-VIP and set these as environment variables. This step must be completed before deploying any other node in the cluster (both control-plane and worker nodes).
export VIP=192.168.103.130
export INTERFACE=ens18
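
If you are unsure of the interface name on your hosts (ens18 is specific to this environment), list the interfaces and their addresses first:

# Brief one-line-per-interface overview of names, state, and addresses.
ip -br addr show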
  2. Import the RBAC manifest for Kube-VIP:
curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/rke2/server/manifests/kube-vip-rbac.yaml
  3. Fetch the kube-vip image:
/var/lib/rancher/rke2/bin/crictl -r "unix:///run/k3s/containerd/containerd.sock" pull ghcr.io/kube-vip/kube-vip:latest
  4. Deploy Kube-VIP:
CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock ctr -n k8s.io run \
--rm \
--net-host \
ghcr.io/kube-vip/kube-vip:latest vip /kube-vip manifest daemonset --arp --interface $INTERFACE --address $VIP --controlplane --leaderElection --taint --services --inCluster | tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml
  5. Wait for kube-vip to complete bootstrapping (run as root):
kubectl rollout status daemonset kube-vip-ds -n kube-system --timeout=650s
  6. Once the rollout completes, check that the kube-vip DaemonSet is running one pod:
kubectl get ds -n kube-system kube-vip-ds
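
At this point the VIP should be announced on the network. A simple reachability check (using the VIP exported earlier):

# The VIP should answer once kube-vip has elected a leader.
ping -c 3 $VIP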

Step 5: Remaining Control-Plane nodes (do it as root)

  1. Create the required directories for the RKE2 configuration:
mkdir -p /etc/rancher/rke2/
mkdir -p /var/lib/rancher/rke2/server/manifests/
  2. Create a deployment manifest called config.yaml for the RKE2 cluster, replacing the IP addresses and corresponding FQDNs accordingly (add any other fields from the Extra Options section to config.yaml at this point). Note that the server URL points at the VIP, so joining does not depend on any single control-plane node.
cat <<EOF | tee /etc/rancher/rke2/config.yaml
server: https://192.168.103.130:9345
token: K10edbcac3dc0d00c7e0c96edb18c3f6ab551a0346e9e50532d7c890a78708745ff::server:3a6b102ca174a62053632af68e94956e
tls-san:
  - rke201-slyshu.bokors.net
  - 192.168.103.130
  - rkes01-slyshu.bokors.net
  - 192.168.103.131
  - rkes02-slyshu.bokors.net
  - 192.168.103.132
  - rkes03-slyshu.bokors.net
  - 192.168.103.133
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
cni:
  - canal

EOF
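
Before starting the join, it can help to confirm that the supervisor port on the VIP is reachable from this node (a simple TCP probe; 9345 is RKE2's supervisor port from the config above):

# Succeeds only if a control-plane node currently holds the VIP.
nc -zv 192.168.103.130 9345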

  3. Ingress-Nginx config for RKE2: by default, the RKE2-bundled ingress controller doesn't allow additional snippet information in Ingress manifests, so create this config before starting the deployment of RKE2.

cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      metrics:
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
      config:
        use-forwarded-headers: "true"
      allowSnippetAnnotations: "true"
EOF

Step 6: Begin the RKE2 Deployment

  1. Begin the RKE2 Deployment
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
  2. Start the RKE2 service. Starting the service will take approx. 10-15 minutes depending on the network connection.
systemctl start rke2-server
  3. Enable the RKE2 service.
systemctl enable rke2-server

Step 7: Bash Completion for kubectl

  1. Install the bash-completion package:
apt install bash-completion -y
  2. Set up autocompletion for bash in the current shell (the bash-completion package must be installed first):
source <(kubectl completion bash) 
echo "source <(kubectl completion bash)" >> ~/.bashrc
  3. Also, add an alias for the short notation of kubectl:
echo "alias k=kubectl"  >> ~/.bashrc 
echo "complete -o default -F __start_kubectl k"  >> ~/.bashrc 
  4. Source your ~/.bashrc:
source ~/.bashrc

Step 8: Install helm

  1. Helm is a nifty tool to deploy external components. To install Helm on the cluster, execute the following command:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  2. Enable bash completion for Helm:
helm completion bash > /etc/bash_completion.d/helm
  3. Log out and log back in to enable the bash completion (or run su - if running as the root user).

  4. List the cluster nodes' details. You can get the details of all nodes using the following command:

kubectl get nodes -o wide

Step 9: Install Rancher and Cert-Manager

  1. Add the Rancher Helm repo and create the namespace:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

  2. Install cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.13.2
kubectl get pods --namespace cert-manager
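
Optionally, wait until all cert-manager pods report Ready before installing Rancher:

# Blocks until every pod in the namespace is Ready, or times out.
kubectl wait --namespace cert-manager --for=condition=Ready pods --all --timeout=300s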

  3. Install Rancher:

helm install rancher rancher-latest/rancher \
 --namespace cattle-system \
 --set hostname=rancher.bokors.net \
 --set bootstrapPassword=admin
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get deploy rancher
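
For the Rancher UI to be reachable, rancher.bokors.net must resolve to the cluster's ingress; pointing it at the VIP is one option. A hypothetical /etc/hosts entry on a client machine (proper DNS is preferable):

# Maps the Rancher hostname to the VIP for testing purposes.
echo "192.168.103.130 rancher.bokors.net" >> /etc/hosts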