Thanks to the following source on Medium
Kubernetes Install #
This config is tested on Ubuntu 24.04.3 LTS.
Global Config #
Apply updates and reboot.
sudo apt update
sudo apt upgrade
sudo reboot
Add the kernel module settings to containerd.conf and load the modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Add settings to kubernetes.conf to allow bridged IPv4/IPv6 traffic through iptables and enable IP forwarding
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Reload updated config
sudo sysctl --system
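Optionally, verify that the modules are loaded and the sysctl values took effect (a quick sanity check, not part of the original steps):
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# All three values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward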
Install required tools and CA certificates
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates nano
Add Docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Then, install containerd
sudo apt update
sudo apt install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
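Optionally confirm the cgroup change took effect and that containerd is running:
# Should print 'SystemdCgroup = true'
grep SystemdCgroup /etc/containerd/config.toml
# Should print 'active'
systemctl is-active containerd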
Next, add the Kubernetes apt repository. The signing key must be in place before the repository is used (a fix found on GitHub was needed here):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install kubectl, kubeadm and kubelet:
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
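As an optional check, print the installed versions and confirm the packages are held:
kubeadm version
kubectl version --client
kubelet --version
# Should list kubelet, kubeadm and kubectl
apt-mark showhold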
Modify the /etc/hosts file to add the hostnames of each node.
sudo nano /etc/hosts
Now, add the local IP addresses of all the computers that will be part of the cluster to the /etc/hosts file, and then save it. (Note: the following IP addresses and hostnames are examples; the actual values will vary depending on your environment.)
192.168.1.150 master.local master
192.168.1.151 worker1.local worker1
192.168.1.152 worker2.local worker2
192.168.1.153 worker3.local worker3
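A quick way to confirm the entries resolve (using the example hostnames above; adjust to your own):
# Each name should print the IP you configured
getent hosts master.local worker1.local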
Install Docker Community Edition
sudo apt-get install docker-ce
Open TCP port for K8s API communication (default 6443)
sudo iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
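Note that a plain iptables rule does not survive a reboot. One option (an addition, not from the original source) is the iptables-persistent package:
# Persist the current iptables rules across reboots
sudo apt install -y iptables-persistent
sudo netfilter-persistent save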
By default, the kubelet will not start on a Linux node that has swap enabled, so swap must be disabled on each node.
# To disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Open the /etc/fstab file to check that swap is commented out
sudo nano /etc/fstab
Finally, reboot and double-check that swap is gone.
sudo reboot
After the reboot, check that swap is gone
free -h
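As an extra check, swapon prints nothing when swap is fully disabled:
# No output means no active swap devices
swapon --show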
Master Setup #
Now it's time to set up the master node. This section will take a while to run.
# Set hostname for each machine
sudo hostnamectl set-hostname "master.local"
exec bash
On the master node only, pre-pull the control plane images
sudo kubeadm config images pull
sudo reboot
Update the control plane endpoint with your hostname, or use master.local
# note: '--ignore-preflight-errors=all' is added
# due to initialization stops with some minor errors
sudo kubeadm init --control-plane-endpoint=master.local --ignore-preflight-errors=all
# Copy /etc/kubernetes/admin.conf so the cluster can be
# used as a non-root user
# Create .kube/config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
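Alternatively, when running as root you can point kubectl at the admin config directly:
export KUBECONFIG=/etc/kubernetes/admin.conf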
# Install Calico Network Plugin
# v3.31.3 is used below; check for the latest version first
# Check here https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises
curl https://raw.githubusercontent.com/projectcalico/calico/v3.31.3/manifests/calico.yaml -O
kubectl apply -f calico.yaml
# Confirm the control plane node is in a Ready state.
kubectl get nodes
Metric Add-on #
Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl patch deployment metrics-server -n kube-system --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
kubectl get apiservice | grep metrics
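Once the metrics API reports Available (this can take a minute or two), you can query live resource usage:
# Node and pod usage served by metrics-server
kubectl top nodes
kubectl top pods -A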
CNI Install #
Now you need to install a CNI add-on. Choose one; this guide includes the steps for either Cilium or Calico.
Cilium #
First, install Helm:
sudo apt-get install curl gpg apt-transport-https --yes
curl -fsSL https://packages.buildkite.com/helm-linux/helm-debian/gpgkey | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/helm.gpg] https://packages.buildkite.com/helm-linux/helm-debian/any/ any main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
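Verify the Helm install:
helm version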
Install the Cilium CLI:
{
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
}
cilium install --version 1.18.6
kubectl get pods -A
cilium status --wait
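Optionally, run Cilium's built-in connectivity test to validate the installation end to end (it deploys test workloads and can take several minutes):
cilium connectivity test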
cilium hubble enable
cilium status
Install the Hubble CLI:
{
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
}
hubble status -P
cilium hubble port-forward&
hubble observe
cilium hubble enable --ui
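To open the Hubble UI in a browser, the CLI can set up the port-forward for you:
cilium hubble ui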
Calico #
Download calicoctl, the Calico command-line tool.
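A minimal sketch for fetching the binary, assuming an amd64 node and the same v3.31.0 release as the manifest below (adjust for your architecture):
curl -L https://github.com/projectcalico/calico/releases/download/v3.31.0/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
Then install Calico with the Typha manifest and wait for the node daemonset to roll out: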
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/calico-typha.yaml
kubectl rollout status -n kube-system ds/calico-node
Kubernetes Worker Node Install #
Connect the Master Node with Worker Nodes #
First, generate a join command (with a fresh token) on the control plane node:
sudo kubeadm token create --print-join-command
Connecting worker nodes to the master node is straightforward. To connect a computer that will serve as a worker node, log in to that computer (either via SSH or directly) and run the join command printed by sudo kubeadm token create --print-join-command.
# '--ignore-preflight-errors=all' used to bypass on minor errors
sudo kubeadm join master.local:6443 --token ...REDACTED... --discovery-token-ca-cert-hash sha256:...REDACTED... --ignore-preflight-errors=all
# reboot
sudo reboot
After rebooting each worker node, verify on the master node that the connection has been established correctly.
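For example, on the master node:
# All nodes should eventually show a Ready status
kubectl get nodes -o wide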
Troubleshooting #
# Node information
kubectl get nodes
kubectl describe node <node name>
# Pod information
kubectl get pods -n kube-system -o wide
kubectl describe pod <pod-name> -n kube-system
# Pull logs
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl logs <calico-node-pod-name> -n kube-system
kubectl describe pod <calico-node-pod-name> -n kube-system
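If a node stays NotReady or a join fails, the kubelet logs on the affected node are usually the place to look (standard systemd commands, not from the original source):
# kubelet service state and recent logs
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager --since "10 minutes ago"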