= Ubuntu 22.04 =
Latest:
 
* [[Ubuntu kubernetes installation]]
* [[Kubernetes storage class]]
* [[Monitor kubernetes]]
 
= Ubuntu 24.04 (Archived) =
==System preparation==
<syntaxhighlight lang="bash">
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=24.04
DISTRIB_CODENAME=noble
DISTRIB_DESCRIPTION="Ubuntu 24.04.2 LTS"
</syntaxhighlight>
===Upgrade===
<syntaxhighlight lang="bash">
sudo apt update
sudo apt upgrade
do-release-upgrade
</syntaxhighlight>

=== Mount data disk (Optional) ===
<syntaxhighlight lang="bash">
mkfs.xfs /dev/vdb
lsof /var
mv /var/ /var0
mkdir /mnt/newvar/
mount /dev/vdb /mnt/newvar/
rsync -aqxP /var0/* /mnt/newvar/
umount /mnt/newvar
mkdir /var
mount /dev/vdb /var

vim /etc/fstab
# /dev/vdb /var xfs  defaults 0 0
</syntaxhighlight>
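If the device name might change across reboots, the fstab entry can also reference the filesystem UUID instead of /dev/vdb; a minimal sketch (the UUID below is a placeholder taken from blkid output):
<syntaxhighlight lang="bash">
# look up the UUID of the new filesystem
blkid /dev/vdb
# then reference it in /etc/fstab instead of the device name, e.g.:
# UUID=<uuid-from-blkid> /var xfs defaults 0 0
</syntaxhighlight>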
==System configuration==
<ref>https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic</ref>
<syntaxhighlight lang="bash">
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
</syntaxhighlight>
Verify:
<syntaxhighlight lang="bash">
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
</syntaxhighlight>
==Disable swap==
<syntaxhighlight lang="bash">
# check if swap is disabled
swapon -s
</syntaxhighlight>
==Install Kubernetes==
===Containerd runtime===
<syntaxhighlight lang="bash">
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Use mirror instead:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker $USER
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
</syntaxhighlight>
Generate the default containerd config:
<syntaxhighlight lang="bash">
sudo containerd config default | sudo tee /etc/containerd/config.toml
</syntaxhighlight>
And modify it to use the systemd cgroup driver (and, optionally, the mirror sandbox image):
<syntaxhighlight lang="toml">
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
</syntaxhighlight>
Restart the service:
<syntaxhighlight lang="bash">
sudo systemctl restart containerd
</syntaxhighlight>
The above steps are required; otherwise you might get an error like:
 validate CRI v1 runtime API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService[preflight
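Once crictl is available (the cri-tools package is pulled in together with the kubeadm packages installed below), the CRI endpoint can be checked directly before running kubeadm; a quick sanity check:
<syntaxhighlight lang="bash">
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock version
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info | head
</syntaxhighlight>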


=== Install Kubeadm===
<syntaxhighlight lang="bash">
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>
=== (Optional) Configure docker/containerd proxy ===
Configure the proxy server for docker:
<syntaxhighlight lang="bash">
sudo mkdir -p /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf

# Must configure HTTPS_PROXY
[Service]
Environment="HTTP_PROXY=http://user:password@riguz.com:8080/"
Environment="HTTPS_PROXY=http://user:password@riguz.com:8080/"
</syntaxhighlight>
Also create /etc/systemd/system/containerd.service.d/http-proxy.conf with the same content, then restart the services:
<syntaxhighlight lang="bash">
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart containerd

systemctl show --property=Environment docker
</syntaxhighlight>
=== (Master) Create cluster ===
<syntaxhighlight lang="bash">
sudo systemctl start kubelet
sudo systemctl enable kubelet

MASTER_IP="83.229.126.124"
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"

sudo kubeadm init \
  --pod-network-cidr=$POD_CIDR \
  --apiserver-advertise-address=$MASTER_IP \
  --control-plane-endpoint=$MASTER_IP \
  --node-name $NODENAME
</syntaxhighlight>
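The same settings can also be captured in a kubeadm configuration file, which is easier to keep under version control; a sketch, since the exact schema depends on the installed kubeadm version:
<syntaxhighlight lang="bash">
# print the default InitConfiguration/ClusterConfiguration for this kubeadm version
kubeadm config print init-defaults > kubeadm-config.yaml
# edit kubeadm-config.yaml: set localAPIEndpoint.advertiseAddress, nodeRegistration.name,
# controlPlaneEndpoint and networking.podSubnet to match the values used above
sudo kubeadm init --config kubeadm-config.yaml
</syntaxhighlight>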
It takes a long time to pull the images, so we can pull them first:
<syntaxhighlight lang="bash">
kubeadm config images list
kubeadm config images pull
</syntaxhighlight>
To view images on the local host:
<syntaxhighlight lang="bash">
ctr -n k8s.io images list
</syntaxhighlight>
=== (Optional) Dump and import images ===
<syntaxhighlight lang="bash">
kubeadm config images list

# download images in a server:
sudo ctr images pull registry.k8s.io/kube-apiserver:v1.29.3
sudo ctr images pull registry.k8s.io/kube-controller-manager:v1.29.3
sudo ctr images pull registry.k8s.io/kube-scheduler:v1.29.3
sudo ctr images pull registry.k8s.io/kube-proxy:v1.29.3
sudo ctr images pull registry.k8s.io/coredns/coredns:v1.11.1
sudo ctr images pull registry.k8s.io/pause:3.9
sudo ctr images pull registry.k8s.io/etcd:3.5.12-0

# export
sudo ctr images export kubeadm-1.29.3-images.tar registry.k8s.io/kube-apiserver:v1.29.3 registry.k8s.io/kube-controller-manager:v1.29.3 registry.k8s.io/kube-scheduler:v1.29.3 registry.k8s.io/kube-proxy:v1.29.3 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0

# import
ctr -n k8s.io images import kubeadm-1.29.3-images.tar

# Verify all images have been pulled:
kubeadm config images pull
W0410 16:58:19.699693  323494 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": dial tcp 151.101.89.55:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
W0410 16:58:19.699757  323494 version.go:105] falling back to the local client version: v1.29.3
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.3
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.3
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.3
[config/images] Pulled registry.k8s.io/kube-proxy:v1.29.3
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.12-0
</syntaxhighlight>
=== Generate config ===
<syntaxhighlight lang="bash">
# non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# root
export KUBECONFIG=/etc/kubernetes/admin.conf
</syntaxhighlight>
<syntaxhighlight lang="bash">
$ kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
riguz   Ready    control-plane   2m8s   v1.32.3
</syntaxhighlight>
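Optionally, shell completion makes kubectl much easier to use interactively; for bash:
<syntaxhighlight lang="bash">
sudo apt-get install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
</syntaxhighlight>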
=== Install CNI plugin ===
<ref>https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart</ref>
<syntaxhighlight lang="bash">
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/custom-resources.yaml

watch kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bc6b5bb8-5dnsf   1/1     Running   0          26m
calico-node-jdw2v                         1/1     Running   0          26m
calico-typha-5c754949c6-qhfwz             1/1     Running   0          26m
csi-node-driver-9wmmd                     2/2     Running   0          26m
</syntaxhighlight>
By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes:
<syntaxhighlight lang="bash">
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</syntaxhighlight>
<syntaxhighlight lang="bash">
$ kubectl get nodes -o wide
NAME    STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
riguz   Ready    control-plane   13m   v1.32.3   83.229.126.124   <none>        Ubuntu 24.04.2 LTS   6.8.0-58-generic   containerd://1.7.27
</syntaxhighlight>
To dump images:
<syntaxhighlight lang="bash">
sudo ctr images pull docker.io/calico/apiserver:v3.27.3
sudo ctr images pull docker.io/calico/cni:v3.27.3
sudo ctr images pull docker.io/calico/csi:v3.27.3
sudo ctr images pull docker.io/calico/kube-controllers:v3.27.3
sudo ctr images pull docker.io/calico/node-driver-registrar:v3.27.3
sudo ctr images pull docker.io/calico/node:v3.27.3
sudo ctr images pull docker.io/calico/pod2daemon-flexvol:v3.27.3
sudo ctr images pull docker.io/calico/typha:v3.27.3
sudo ctr images pull quay.io/tigera/operator:v1.32.7
sudo ctr images export calico-3.27.3.tar docker.io/calico/apiserver:v3.27.3 docker.io/calico/cni:v3.27.3 docker.io/calico/csi:v3.27.3 docker.io/calico/kube-controllers:v3.27.3 docker.io/calico/node-driver-registrar:v3.27.3 docker.io/calico/node:v3.27.3 docker.io/calico/pod2daemon-flexvol:v3.27.3 docker.io/calico/typha:v3.27.3 quay.io/tigera/operator:v1.32.7
</syntaxhighlight>
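On an offline node, the exported archive can be loaded into containerd's k8s.io namespace the same way as the kubeadm images above; note that the tags in the dump list (v3.27.3) should be aligned with the Calico version actually installed by the manifests (v3.29.3 above):
<syntaxhighlight lang="bash">
# on each node, load the archive into containerd's k8s.io namespace
ctr -n k8s.io images import calico-3.27.3.tar
ctr -n k8s.io images list | grep calico
</syntaxhighlight>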
===Join nodes===
<syntaxhighlight lang="bash">
kubeadm join 10.19.30.61:6443 --token xxx \
    --node-name node02 \
    --discovery-token-ca-cert-hash xxx
</syntaxhighlight>
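Bootstrap tokens expire (24 hours by default), so the full join command can be regenerated on the control plane whenever a new node is added:
<syntaxhighlight lang="bash">
kubeadm token create --print-join-command
</syntaxhighlight>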
== HELM ==
<syntaxhighlight lang="bash">
wget https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz
tar -zxvf helm-v3.14.4-linux-amd64.tar.gz
install linux-amd64/helm /usr/local/bin/helm
</syntaxhighlight>
== NFS ==
<syntaxhighlight lang="bash">
apt install nfs-common
# try to mount it
mount -t nfs -o vers=3,nolock,proto=tcp,noresvport 10.19.31.01:/cfs-xxx /mnt/tmpnfs
</syntaxhighlight>
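Note the mount target has to exist before the test mount, and the share can be unmounted again once it is confirmed reachable; for example:
<syntaxhighlight lang="bash">
mkdir -p /mnt/tmpnfs   # create the mount target used by the test mount above
umount /mnt/tmpnfs     # clean up after verifying the share is reachable
</syntaxhighlight>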
<syntaxhighlight lang="bash">
sudo ctr images pull registry.k8s.io/sig-storage/nfsplugin:v4.6.0
sudo ctr images pull registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
sudo ctr images pull registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
sudo ctr images pull registry.k8s.io/sig-storage/livenessprobe:v2.11.0
sudo ctr images pull registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
sudo ctr images pull registry.k8s.io/sig-storage/snapshot-controller:v6.3.2
sudo ctr images export nfs-csi-4.6.0.tar registry.k8s.io/sig-storage/nfsplugin:v4.6.0 registry.k8s.io/sig-storage/csi-provisioner:v3.6.2 registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2 registry.k8s.io/sig-storage/livenessprobe:v2.11.0 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1 registry.k8s.io/sig-storage/snapshot-controller:v6.3.2
wget https://github.com/kubernetes-csi/csi-driver-nfs/archive/refs/tags/v4.6.0.tar.gz
tar -zxvf v4.6.0.tar.gz
helm install csi-driver-nfs ./csi-driver-nfs -n kube-system
</syntaxhighlight>
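To check that the NFS CSI driver registered correctly (pod names may differ slightly between chart versions):
<syntaxhighlight lang="bash">
kubectl get csidriver
kubectl -n kube-system get pods | grep csi-nfs
</syntaxhighlight>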
Create a storage class:
<syntaxhighlight lang="yaml">
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: *********
  share: /cfs-*****
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=3
  - nolock
  - proto=tcp
  - noresvport
</syntaxhighlight>
<syntaxhighlight lang="bash">
kubectl apply -f nfs-storageclass.yaml
kubectl get storageclass
NAME      PROVISIONER      RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
nfs-csi  nfs.csi.k8s.io  Delete          Immediate          false                  5s
</syntaxhighlight>
Test using NFS:
<syntaxhighlight lang="yaml">
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
</syntaxhighlight>
<syntaxhighlight lang="bash">
kubectl get pvc
NAME              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  VOLUMEATTRIBUTESCLASS  AGE
pvc-nfs-dynamic  Bound    pvc-25aef236-bc5b-413a-9cb6-5616ad060f96  1Gi        RWX            nfs-csi        <unset>                7s
</syntaxhighlight>
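To confirm the volume is actually writable, a throwaway pod can mount the claim (the pod and image below are only an example):
<syntaxhighlight lang="bash">
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs-dynamic
EOF
kubectl exec nfs-test -- cat /data/hello.txt
kubectl delete pod nfs-test
</syntaxhighlight>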
Set nfs-csi as the default storage class:
<syntaxhighlight lang="bash">
kubectl patch storageclass nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</syntaxhighlight>
= OpenEBS LocalPV (Advanced Local Storage)=
Best for: Single-node clusters requiring features like snapshots, backups, or multi-disk management<ref>https://openebs.io/docs/quickstart-guide/installation</ref>.
Why:
* Extends local storage with enterprise features.
* Supports ReadWriteOnce (RWO) and integrates with Velero for backups.
<syntaxhighlight lang="bash">
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
</syntaxhighlight>
Verify:
<syntaxhighlight lang="bash">
$ helm ls -n openebs
NAME     NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1          2025-04-18 15:01:20.055693605 +0000 UTC   deployed   openebs-4.2.0   4.2.0
$ kubectl get pods -n openebs
NAME                                              READY  STATUS    RESTARTS  AGE
openebs-localpv-provisioner-699ddcb856-b5qn7      1/1    Running  0          60s
openebs-lvm-localpv-controller-86b4d6dcff-lj8f9  5/5    Running  0          60s
openebs-lvm-localpv-node-pvlhj                    2/2    Running  0          60s
openebs-zfs-localpv-controller-5b7846bf9-l4tg8    5/5    Running  0          60s
openebs-zfs-localpv-node-jp9x5                    2/2    Running  0          60s
$ kubectl get storageclass
NAME              PROVISIONER        RECLAIMPOLICY  VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION  AGE
openebs-hostpath  openebs.io/local  Delete          WaitForFirstConsumer  false                  86s
</syntaxhighlight>
Set as the default storage class:
<syntaxhighlight lang="bash">
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</syntaxhighlight>
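Only one StorageClass should carry the default annotation; if nfs-csi was marked as default earlier, unset it so openebs-hostpath is the only default:
<syntaxhighlight lang="bash">
kubectl patch storageclass nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl get storageclass
</syntaxhighlight>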
= Metrics server =
<syntaxhighlight lang="bash">
sudo ctr images pull registry.k8s.io/metrics-server/metrics-server:v0.7.1
sudo ctr images export metrics-0.7.1.tar registry.k8s.io/metrics-server/metrics-server:v0.7.1
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.1/components.yaml
# add --kubelet-insecure-tls to the metrics-server container args
kubectl apply -f components.yaml
</syntaxhighlight>
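Once the metrics-server pod is ready, the metrics API can be exercised directly:
<syntaxhighlight lang="bash">
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl top nodes
kubectl top pods -A
</syntaxhighlight>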


=Dashboard=
<syntaxhighlight lang="bash">
sudo ctr images pull docker.io/kubernetesui/dashboard-auth:1.1.3
sudo ctr images pull docker.io/kubernetesui/dashboard-api:1.4.3
sudo ctr images pull docker.io/kubernetesui/dashboard-web:1.3.0
sudo ctr images pull docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1
# other images
# kong:3.6
sudo ctr images export dashboard-7.3.1.tar docker.io/kubernetesui/dashboard-auth:1.1.3 docker.io/kubernetesui/dashboard-api:1.4.3 docker.io/kubernetesui/dashboard-web:1.3.0 docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1
</syntaxhighlight>
<syntaxhighlight lang="bash">
wget https://github.com/kubernetes/dashboard/releases/download/kubernetes-dashboard-7.3.1/kubernetes-dashboard-7.3.1.tgz
tar -zxvf kubernetes-dashboard-7.3.1.tgz
helm upgrade --install kubernetes-dashboard ./kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
</syntaxhighlight>
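The chart exposes the UI behind a Kong proxy service. One way to reach it locally and mint a login token (the ServiceAccount name is an example, and the service name should be confirmed with kubectl -n kubernetes-dashboard get svc):
<syntaxhighlight lang="bash">
# create an admin ServiceAccount and a token for logging in (example names)
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin

# forward the UI to https://localhost:8443
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
</syntaxhighlight>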
= LoadBalancer =
<ref>https://metallb.universe.tf/installation/</ref>
<ref>https://www.lixueduan.com/posts/cloudnative/01-metallb/</ref>
<ref>https://metallb.io/installation/</ref>
<syntaxhighlight lang="bash">
kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
kubectl get pods -n metallb-system -o wide
NAME                        READY  STATUS    RESTARTS  AGE  IP                NODE    NOMINATED NODE  READINESS GATES
controller-756c6b677-ggl74  1/1    Running  0          12m  192.168.196.152  node01  <none>          <none>
speaker-5m9p7                1/1    Running  0          12m  10.19.30.61      master  <none>          <none>
speaker-pmw7p                1/1    Running  0          12m  10.19.30.13      node02  <none>          <none>
speaker-tmst5                1/1    Running  0          12m  10.19.30.64      node01  <none>          <none>
</syntaxhighlight>
Configure an IP address pool:
<syntaxhighlight lang="yaml">
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.20.0.0/24
</syntaxhighlight>
Configure L2 mode:
<syntaxhighlight lang="yaml">
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
</syntaxhighlight>
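After saving the two manifests (the file names here are arbitrary), apply them and test with a simple LoadBalancer service, which should receive an address from 172.20.0.0/24:
<syntaxhighlight lang="bash">
kubectl apply -f ipaddresspool.yaml -f l2advertisement.yaml

# quick test: expose an nginx deployment through MetalLB
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test   # EXTERNAL-IP should come from the pool
kubectl delete svc,deployment lb-test
</syntaxhighlight>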
= Traefik Ingress=
<ref>https://platform9.com/learn/v1.0/tutorials/traefik-ingress</ref>
<syntaxhighlight lang="bash">
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -n kube-system
kubectl get ingressclass
NAME      CONTROLLER                      PARAMETERS  AGE
traefik  traefik.io/ingress-controller  <none>      45s
</syntaxhighlight>
= (New) Traefik IngressRoute =
<ref>https://doc.traefik.io/traefik/providers/kubernetes-crd/</ref>
<syntaxhighlight lang="bash">
# Install Traefik Resource Definitions:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.1/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
# Install RBAC for Traefik:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.1/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
</syntaxhighlight>
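With the CRDs and RBAC in place, routes are declared as IngressRoute objects; a minimal sketch, where the host name and backend service are placeholders:
<syntaxhighlight lang="bash">
kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: example-route
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`app.example.com`)
      kind: Rule
      services:
        - name: example-service
          port: 80
EOF
</syntaxhighlight>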
=Nginx Ingress=
<ref>https://kubernetes.github.io/ingress-nginx/deploy/#quick-start</ref>
<syntaxhighlight lang="bash">
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</syntaxhighlight>
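Verify the controller is running and registered an IngressClass (with MetalLB configured above, the controller service should also receive an external address):
<syntaxhighlight lang="bash">
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
kubectl get ingressclass
</syntaxhighlight>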


[[Category:Linux/Unix]]
[[Category:Kubernetes]]
