Note: I will upload a video with these steps. Check back soon.
Documentation source: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Prerequisites
- Linux host
- Minimum 2GB RAM and 2 (v)CPUs
- Public or Private Network connectivity between nodes
- Unique hostname, MAC address, and product_uuid for every node.
Verify the MAC address using: ip link
or: ifconfig -a
Check the product_uuid using: sudo cat /sys/class/dmi/id/product_uuid
- Swap support is available since kubelet v1.22; as of v1.28 it is supported only for cgroup v2. Check your cgroup version by running:
grep cgroup /proc/filesystems
To be safe, turn off swap by running: sudo swapoff -a ; sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- Open the required ports as below:
src: https://kubernetes.io/docs/reference/networking/ports-and-protocols/
Control plane
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 6443 | Kubernetes API server | All |
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
TCP | Inbound | 10259 | kube-scheduler | Self |
TCP | Inbound | 10257 | kube-controller-manager | Self |
Although etcd ports are included in the control plane section, you can also host your own etcd cluster externally or on custom ports.
Worker node(s)
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
TCP | Inbound | 30000-32767 | NodePort Services† | All |
† Default port range for NodePort Services.
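To spot-check reachability before bootstrapping, here is a minimal sketch using nc; the control-plane address 192.168.0.10 is a placeholder, substitute your own, and add any other ports from the tables above you want to verify.

```shell
#!/bin/sh
# Placeholder control-plane address; replace with your node's IP.
CONTROL_PLANE=192.168.0.10
for port in 6443 10250; do
  if nc -z -w 2 "$CONTROL_PLANE" "$port"; then
    echo "port $port: reachable"
  else
    echo "port $port: closed or filtered"
  fi
done
```

Run it from a worker node; a "closed or filtered" result usually means a firewall rule is missing or the service is not listening yet.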
Commands in a script:
#!/bin/sh
product_uuid=$(sudo cat /sys/class/dmi/id/product_uuid)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
echo "The output is:"
echo "UUID: $product_uuid"
echo "Hostname: $(hostname)"
Installing a container runtime – Containerd
Documentation source: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Container runtime is the application responsible for running containers in Kubernetes.
Installation prerequisites
Forwarding IPv4 and letting iptables see bridged traffic:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
#sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
#Apply sysctl params without reboot
sudo sysctl --system
Verify that the br_netfilter and overlay modules are loaded by running:
lsmod | grep br_netfilter
lsmod | grep overlay
Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
cgroupfs or systemd driver?
If your Linux system uses systemd as the init system, use the systemd cgroup driver; there is no need to run two different cgroup drivers.
Since v1.22, kubeadm defaults the cgroup driver to systemd.
Installing Containerd
Documentation source: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
After installing Containerd, you will also install runc and CNI plugins.
Step 1: Installing containerd
Download the binary for your amd64 or arm64 system from https://github.com/containerd/containerd/releases. Make sure to download the latest release from that page and substitute it for the version shown below.
cd /tmp
Example for amd64
wget https://github.com/containerd/containerd/releases/download/v1.7.15/containerd-1.7.15-linux-amd64.tar.gz
Example for arm64
wget https://github.com/containerd/containerd/releases/download/v1.7.15/containerd-1.7.15-linux-arm64.tar.gz
Extract
sudo tar Cxzvf /usr/local containerd-1.7.15-linux-amd64.tar.gz
For systemd users only, install the containerd service unit:
sudo mkdir -p /usr/local/lib/systemd/system
sudo wget -O /usr/local/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Or, if you still get the error "No such file or directory", use the distro path instead:
sudo wget -O /usr/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Reload service daemons and enable containerd
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
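Because kubeadm defaults to the systemd cgroup driver (see the cgroup driver section above), containerd should be configured to match. A sketch that generates the default config and flips runc's SystemdCgroup flag:

```shell
# Generate containerd's default config, then enable the systemd cgroup driver
# for the runc runtime so it matches kubeadm's default.
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```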
Step 2: Installing runc
Download the right version for your system (amd64 or arm64). Assets download page: https://github.com/opencontainers/runc/releases
Download runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
#wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.arm64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
Step 3: Installing CNI plugins
Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases, and extract it under /opt/cni/bin:
download for amd64
wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz
or download for arm64
wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-arm64-v1.4.1.tgz
Extract it
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.1.tgz
#sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.4.1.tgz
Containerd endpoint: unix:///var/run/containerd/containerd.sock
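Before moving on, it is worth confirming all three components are in place. A quick sanity check, assuming the default install locations used above:

```shell
# Each command should print a version (or a plugin list) if the install succeeded.
containerd --version
runc --version
ls /opt/cni/bin
# The socket should exist once the containerd service is running.
test -S /var/run/containerd/containerd.sock && echo "containerd socket present"
```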
Install kubeadm, kubelet and kubectl
kubeadm: the tool used to bootstrap the Kubernetes cluster.
kubelet: the component that runs on all the machines in your cluster and does things like starting pods and containers.
kubectl: the command-line utility used to talk to your cluster.
Update apt package index and install prerequisite apps
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
Download the public signing key for the Kubernetes package repositories.
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add apt repository for packages intended for v1.29
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
#Install kubeadm, kubelet and kubectl and hold their versions to prevent the system from updating them.
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Enable the kubelet service before running kubeadm:
sudo systemctl enable --now kubelet
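A quick check that all three tools installed and are pinned (exact version output will vary; this assumes the v1.29 repository added above):

```shell
kubeadm version -o short
kubectl version --client
kubelet --version
# kubeadm, kubelet and kubectl should all be listed as held:
apt-mark showhold
```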
Creating a cluster with kubeadm
Source docs : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
To initialize the control-plane node, run:
sudo kubeadm init --apiserver-advertise-address=192.168.0.10 --pod-network-cidr=192.168.0.0/16
Change the pod network CIDR and your control plane's advertise IP as needed. You may also omit these options, in which case the defaults are used.
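After init succeeds, kubeadm prints instructions for setting up kubectl access as a regular user; they look like this:

```shell
# Copy the admin kubeconfig into your home directory so kubectl can find it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Also save the kubeadm join command printed at the end of the output; you will need it when joining worker nodes below.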
Install CNI – Calico
Source documentation: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises
Install the operator on your cluster.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
Download the custom resources necessary to configure Calico.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml -O
If you wish to customize the Calico install, customize the downloaded custom-resources.yaml manifest locally.
Create the manifest to install Calico.
kubectl create -f custom-resources.yaml
Verify Calico installation in your cluster.
watch kubectl get pods -n calico-system
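Once the calico-system pods are all Running, the control-plane node should flip from NotReady to Ready. A small polling sketch that waits for this (times out after about five minutes):

```shell
#!/bin/sh
# Poll until some node reports Ready; exit non-zero on timeout.
for i in $(seq 1 60); do
  if kubectl get nodes | grep -q ' Ready'; then
    echo "node is Ready"
    exit 0
  fi
  sleep 5
done
echo "timed out waiting for node to become Ready" >&2
exit 1
```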
Joining your nodes
SSH to the machine.
Become root (e.g. sudo su -)
Install a runtime if needed.
Run the command that was output by kubeadm init. For example:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
If you do not have the token, you can get it by running the following command on the control-plane node:
kubeadm token list
Tokens expire after 24 hours. You can create a new token by running the following command on the control-plane node:
kubeadm token create
To print the full join command along with a new token, run:
kubeadm token create --print-join-command
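The token steps above can be combined into one helper run on the control-plane node: generate a fresh token with the full join command, then copy and paste it on each worker (a sketch):

```shell
#!/bin/sh
# Prints a ready-to-paste join command with a fresh token (valid for 24 hours).
JOIN_CMD=$(kubeadm token create --print-join-command)
echo "Run this on each worker node as root:"
echo "  $JOIN_CMD"
```

After a worker joins, verify it registered with kubectl get nodes on the control-plane node; new workers show NotReady until the Calico pods start on them.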