Kubernetes installation
= Ubuntu 22.04 =
== System preparation ==
=== Upgrade ===
<syntaxhighlight lang="bash">
sudo apt update
sudo apt upgrade
sudo do-release-upgrade
</syntaxhighlight>
=== Mount data disk ===
<syntaxhighlight lang="bash">
mkfs.xfs /dev/vdb
# make sure nothing still has /var open
lsof /var
mv /var/ /var0
mkdir /mnt/newvar/
mount /dev/vdb /mnt/newvar/
rsync -aqxP /var0/* /mnt/newvar/
umount /mnt/newvar
mkdir /var
mount /dev/vdb /var
# make the mount persistent across reboots
vim /etc/fstab
# /dev/vdb /var xfs defaults 0 0
</syntaxhighlight>
=== System configuration ===
<syntaxhighlight lang="bash">
hostnamectl set-hostname master.xx.com

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# seems no longer required in 1.31
# sudo modprobe overlay
# sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
</syntaxhighlight>
Verify:
<syntaxhighlight lang="bash">
root@vm10-19-30-61:~# lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter
root@vm10-19-30-61:~# lsmod | grep overlay
overlay               151552  0
root@vm10-19-30-61:~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
</syntaxhighlight>
=== Disable swap ===
<syntaxhighlight lang="bash">
# check if swap is disabled
swapon -s
</syntaxhighlight>
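If swap is on, turn it off and keep it off across reboots, since kubeadm's preflight checks fail with swap enabled. A minimal sketch; the sed pattern assumes a conventional fstab swap entry:
<syntaxhighlight lang="bash">
sudo swapoff -a
# comment out the swap entry so it stays off after reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
</syntaxhighlight>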
== Install Kubernetes ==
=== Containerd runtime ===
<syntaxhighlight lang="bash">
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Use mirror instead:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
</syntaxhighlight>
Generate a default containerd config:
<syntaxhighlight lang="bash">
sudo containerd config default | sudo tee /etc/containerd/config.toml
</syntaxhighlight>
Then edit it to use the systemd cgroup driver (the sandbox_image override is only needed when registry.k8s.io is unreachable; note it belongs under the cri plugin table, not under runc.options):
<syntaxhighlight lang="toml">
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    ...
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
</syntaxhighlight>
Restart the service:
<syntaxhighlight lang="bash">
sudo systemctl restart containerd
</syntaxhighlight>
The steps above are required; otherwise kubeadm preflight may fail with:
<syntaxhighlight lang="text">
validate CRI v1 runtime API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService[preflight
</syntaxhighlight>
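Once crictl is available (the cri-tools package is pulled in alongside kubeadm below), the endpoint can be probed directly; a healthy runtime prints status JSON instead of the Unimplemented error:
<syntaxhighlight lang="bash">
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info
</syntaxhighlight>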
=== Install Kubeadm ===
<syntaxhighlight lang="bash">
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>
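The repository above tracks the v1.29 minor release. To pin an exact patch release rather than the latest one, something like the following should work; the 1.29.3-1.1 revision string is an assumption, so check the madison output for the real candidates:
<syntaxhighlight lang="bash">
apt-cache madison kubeadm
sudo apt-get install -y kubelet=1.29.3-1.1 kubeadm=1.29.3-1.1 kubectl=1.29.3-1.1
</syntaxhighlight>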
=== (Optional) Configure docker/containerd proxy ===
For docker, point it at your proxy server:
<syntaxhighlight lang="bash">
sudo mkdir -p /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf

# Must configure HTTPS_PROXY
[Service]
Environment="HTTP_PROXY=http://user:password@riguz.com:8080/"
Environment="HTTPS_PROXY=http://user:password@riguz.com:8080/"
</syntaxhighlight>
Also create /etc/systemd/system/containerd.service.d/http-proxy.conf with the same content, then restart the services:
<syntaxhighlight lang="bash">
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart containerd
systemctl show --property=Environment docker
</syntaxhighlight>
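With a proxy configured, cluster-internal traffic usually needs to be exempted, or API server and pod traffic may get sent to the proxy as well. A sketch of a NO_PROXY line for the same drop-in files, using this document's node subnet, pod CIDR, and the default service CIDR as assumed values:
<syntaxhighlight lang="bash">
[Service]
Environment="NO_PROXY=localhost,127.0.0.1,10.19.30.0/24,192.168.0.0/16,10.96.0.0/12"
</syntaxhighlight>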
=== (Master) Create cluster ===
<syntaxhighlight lang="bash">
sudo systemctl start kubelet
sudo systemctl enable kubelet

MASTER_IP="10.19.30.61"
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"
KUBERNETES_VERSION="v1.29.3"

sudo kubeadm init \
    --kubernetes-version $KUBERNETES_VERSION \
    --pod-network-cidr=$POD_CIDR \
    --apiserver-advertise-address $MASTER_IP \
    --node-name $NODENAME
</syntaxhighlight>
Pulling the control-plane images takes a while, so they can be pulled ahead of time:
<syntaxhighlight lang="bash">
kubeadm config images list
kubeadm config images pull
</syntaxhighlight>
To view the images on the local host:[8]
<syntaxhighlight lang="bash">
ctr -n k8s.io images list
</syntaxhighlight>
=== (Optional) Dump and import images ===
<syntaxhighlight lang="bash">
kubeadm config images list

# download images on a server with internet access:
sudo ctr images pull registry.k8s.io/kube-apiserver:v1.29.3
sudo ctr images pull registry.k8s.io/kube-controller-manager:v1.29.3
sudo ctr images pull registry.k8s.io/kube-scheduler:v1.29.3
sudo ctr images pull registry.k8s.io/kube-proxy:v1.29.3
sudo ctr images pull registry.k8s.io/coredns/coredns:v1.11.1
sudo ctr images pull registry.k8s.io/pause:3.9
sudo ctr images pull registry.k8s.io/etcd:3.5.12-0

# export
sudo ctr images export kubeadm-1.29.3-images.tar registry.k8s.io/kube-apiserver:v1.29.3 registry.k8s.io/kube-controller-manager:v1.29.3 registry.k8s.io/kube-scheduler:v1.29.3 registry.k8s.io/kube-proxy:v1.29.3 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0

# import into the k8s.io namespace on the offline node
ctr -n k8s.io images import kubeadm-1.29.3-images.tar

# Verify all images have been pulled:
kubeadm config images pull
W0410 16:58:19.699693  323494 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": dial tcp 151.101.89.55:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
W0410 16:58:19.699757  323494 version.go:105] falling back to the local client version: v1.29.3
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.3
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.3
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.3
[config/images] Pulled registry.k8s.io/kube-proxy:v1.29.3
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.12-0
</syntaxhighlight>
=== Generate config ===
<syntaxhighlight lang="bash">
# non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# root
export KUBECONFIG=/etc/kubernetes/admin.conf
</syntaxhighlight>
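A quick sanity check that the kubeconfig works:
<syntaxhighlight lang="bash">
kubectl get nodes
</syntaxhighlight>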
=== Install CNI plugin ===
<syntaxhighlight lang="bash">
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system

NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bc6b5bb8-5dnsf   1/1     Running   0          26m
calico-node-jdw2v                         1/1     Running   0          26m
calico-typha-5c754949c6-qhfwz             1/1     Running   0          26m
csi-node-driver-9wmmd                     2/2     Running   0          26m
</syntaxhighlight>
By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes (the taint has been named control-plane since Kubernetes 1.24; only older clusters used master):
<syntaxhighlight lang="bash">
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</syntaxhighlight>
To dump images:
<syntaxhighlight lang="bash">
sudo ctr images pull docker.io/calico/apiserver:v3.27.3
sudo ctr images pull docker.io/calico/cni:v3.27.3
sudo ctr images pull docker.io/calico/csi:v3.27.3
sudo ctr images pull docker.io/calico/kube-controllers:v3.27.3
sudo ctr images pull docker.io/calico/node-driver-registrar:v3.27.3
sudo ctr images pull docker.io/calico/node:v3.27.3
sudo ctr images pull docker.io/calico/pod2daemon-flexvol:v3.27.3
sudo ctr images pull docker.io/calico/typha:v3.27.3
sudo ctr images pull quay.io/tigera/operator:v1.32.7
sudo ctr images export calico-3.27.3.tar docker.io/calico/apiserver:v3.27.3 docker.io/calico/cni:v3.27.3 docker.io/calico/csi:v3.27.3 docker.io/calico/kube-controllers:v3.27.3 docker.io/calico/node-driver-registrar:v3.27.3 docker.io/calico/node:v3.27.3 docker.io/calico/pod2daemon-flexvol:v3.27.3 docker.io/calico/typha:v3.27.3 quay.io/tigera/operator:v1.32.7
</syntaxhighlight>
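On the offline node, import the archive into the k8s.io namespace the same way as the kubeadm images:
<syntaxhighlight lang="bash">
sudo ctr -n k8s.io images import calico-3.27.3.tar
</syntaxhighlight>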
=== Join nodes ===
<syntaxhighlight lang="bash">
kubeadm join 10.19.30.61:6443 --token xxx \
    --node-name node02 \
    --discovery-token-ca-cert-hash xxx
</syntaxhighlight>
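If the token from kubeadm init has expired (they are valid for 24 hours by default), print a fresh join command on the master:
<syntaxhighlight lang="bash">
kubeadm token create --print-join-command
</syntaxhighlight>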
== HELM ==
<syntaxhighlight lang="bash">
wget https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz
tar -zxvf helm-v3.14.4-linux-amd64.tar.gz
sudo install linux-amd64/helm /usr/local/bin/helm
</syntaxhighlight>
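Verify the binary is on the PATH:
<syntaxhighlight lang="bash">
helm version
</syntaxhighlight>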
== NFS ==
<syntaxhighlight lang="bash">
sudo apt install nfs-common
# try to mount it
sudo mount -t nfs -o vers=3,nolock,proto=tcp,noresvport 10.19.31.01:/cfs-xxx /mnt/tmpnfs
</syntaxhighlight>
<syntaxhighlight lang="bash">
sudo ctr images pull registry.k8s.io/sig-storage/nfsplugin:v4.6.0
sudo ctr images pull registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
sudo ctr images pull registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
sudo ctr images pull registry.k8s.io/sig-storage/livenessprobe:v2.11.0
sudo ctr images pull registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
sudo ctr images pull registry.k8s.io/sig-storage/snapshot-controller:v6.3.2
sudo ctr images export nfs-csi-4.6.0.tar registry.k8s.io/sig-storage/nfsplugin:v4.6.0 registry.k8s.io/sig-storage/csi-provisioner:v3.6.2 registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2 registry.k8s.io/sig-storage/livenessprobe:v2.11.0 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1 registry.k8s.io/sig-storage/snapshot-controller:v6.3.2

wget https://github.com/kubernetes-csi/csi-driver-nfs/archive/refs/tags/v4.6.0.tar.gz
tar -zxvf v4.6.0.tar.gz
# the chart is inside the extracted source tree (e.g. under csi-driver-nfs-4.6.0/charts/)
helm install csi-driver-nfs ./csi-driver-nfs -n kube-system
</syntaxhighlight>
Create a storage class:
<syntaxhighlight lang="yaml">
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: *********
  share: /cfs-*****
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=3
  - nolock
  - proto=tcp
  - noresvport
</syntaxhighlight>
<syntaxhighlight lang="bash">
kubectl apply -f nfs-storageclass.yaml
kubectl get storageclass

NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  5s
</syntaxhighlight>
Test using NFS:
<syntaxhighlight lang="yaml">
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
</syntaxhighlight>
<syntaxhighlight lang="bash">
kubectl get pvc

NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-nfs-dynamic   Bound    pvc-25aef236-bc5b-413a-9cb6-5616ad060f96   1Gi        RWX            nfs-csi        <unset>                 7s
</syntaxhighlight>
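To exercise the claim end to end, a throwaway pod can mount it and write a file; the pod name, image, and mount path below are illustrative:
<syntaxhighlight lang="bash">
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-nfs-dynamic
EOF
kubectl exec nfs-test -- cat /data/hello
</syntaxhighlight>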
Set the NFS CSI driver as the default storage class:
<syntaxhighlight lang="bash">
kubectl patch storageclass nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</syntaxhighlight>
= Metric server =
<syntaxhighlight lang="bash">
sudo ctr images pull registry.k8s.io/metrics-server/metrics-server:v0.7.1
sudo ctr images export metrics-0.7.1.tar registry.k8s.io/metrics-server/metrics-server:v0.7.1

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.1/components.yaml
# add --kubelet-insecure-tls to the metrics-server container args in
# components.yaml (needed when the kubelet serving certs are self-signed)
kubectl apply -f components.yaml
</syntaxhighlight>
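Once the metrics-server pod is Ready, resource metrics should appear within a minute or so:
<syntaxhighlight lang="bash">
kubectl top nodes
kubectl top pods -A
</syntaxhighlight>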
= Dashboard =
<syntaxhighlight lang="bash">
sudo ctr images pull docker.io/kubernetesui/dashboard-auth:1.1.3
sudo ctr images pull docker.io/kubernetesui/dashboard-api:1.4.3
sudo ctr images pull docker.io/kubernetesui/dashboard-web:1.3.0
sudo ctr images pull docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1
# other images
# kong:3.6
sudo ctr images export dashboard-7.3.1.tar docker.io/kubernetesui/dashboard-auth:1.1.3 docker.io/kubernetesui/dashboard-api:1.4.3 docker.io/kubernetesui/dashboard-web:1.3.0 docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1
</syntaxhighlight>
<syntaxhighlight lang="bash">
wget https://github.com/kubernetes/dashboard/releases/download/kubernetes-dashboard-7.3.1/kubernetes-dashboard-7.3.1.tgz
tar -zxvf kubernetes-dashboard-7.3.1.tgz
helm upgrade --install kubernetes-dashboard ./kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
</syntaxhighlight>
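To reach the UI, the 7.x chart fronts everything with kong, so port-forward its proxy service; the service name below follows the chart defaults but treat it as an assumption, and the ServiceAccount/token pair is one common way to log in (cluster-admin is fine for a lab, too broad for production):
<syntaxhighlight lang="bash">
# assumed service name from the 7.x chart defaults
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# create an admin account and a login token
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin
</syntaxhighlight>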
= LoadBalancer =
Enable IPVS mode with strictARP for kube-proxy:
<syntaxhighlight lang="bash">
kubectl edit configmap -n kube-system kube-proxy
</syntaxhighlight>
<syntaxhighlight lang="yaml">
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
</syntaxhighlight>
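The MetalLB docs also show a non-interactive way to flip strictARP, which is handier for scripting than the manual edit:
<syntaxhighlight lang="bash">
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
</syntaxhighlight>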
Then install MetalLB:
<syntaxhighlight lang="bash">
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
kubectl get pods -n metallb-system -o wide

NAME                         READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
controller-756c6b677-ggl74   1/1     Running   0          12m   192.168.196.152   node01   <none>           <none>
speaker-5m9p7                1/1     Running   0          12m   10.19.30.61       master   <none>           <none>
speaker-pmw7p                1/1     Running   0          12m   10.19.30.13       node02   <none>           <none>
speaker-tmst5                1/1     Running   0          12m   10.19.30.64       node01   <none>           <none>
</syntaxhighlight>
Configure an IP address pool:
<syntaxhighlight lang="yaml">
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.20.0.0/24
</syntaxhighlight>
Configure L2 mode:
<syntaxhighlight lang="yaml">
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
</syntaxhighlight>
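To check that addresses are actually handed out, expose something as a LoadBalancer and watch for an EXTERNAL-IP from the pool; the deployment name and image are illustrative:
<syntaxhighlight lang="bash">
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
# EXTERNAL-IP should come from 172.20.0.0/24
kubectl get svc lb-test
</syntaxhighlight>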
= Traefik Ingress =
<syntaxhighlight lang="bash">
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -n kube-system
kubectl get ingressclass

NAME      CONTROLLER                      PARAMETERS   AGE
traefik   traefik.io/ingress-controller   <none>       45s
</syntaxhighlight>
= (New) Traefik IngressRoute =
<syntaxhighlight lang="bash">
# Install Traefik Resource Definitions:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.1/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
# Install RBAC for Traefik:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.1/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
</syntaxhighlight>
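With the CRDs and RBAC applied, routes are declared as IngressRoute objects instead of plain Ingress; a minimal sketch, where the host, service name, and port are placeholders:
<syntaxhighlight lang="yaml">
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: example-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`app.example.com`)
      kind: Rule
      services:
        - name: example-service
          port: 80
</syntaxhighlight>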
= Nginx Ingress =
<syntaxhighlight lang="bash">
sudo ctr images pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
sudo ctr images pull registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
sudo ctr images pull registry.k8s.io/ingress-nginx/controller:v1.10.0
sudo ctr images pull registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
sudo ctr images export ingress-nginx-1.10.tar registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334 registry.k8s.io/ingress-nginx/controller:v1.10.0 registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
ctr -n k8s.io images import ingress-nginx-1.10.tar

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</syntaxhighlight>
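A minimal Ingress wired to the new controller via ingressClassName; the host and backend service are placeholders:
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
</syntaxhighlight>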
= References =
[1] https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic
[2] https://docs.docker.com/engine/install/ubuntu/
[3] https://mirrors.tuna.tsinghua.edu.cn/help/docker-ce/
[4] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
[5] https://docs.docker.com/config/daemon/systemd/
[6] https://e-whisper.com/posts/36730/
[7] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
[8] https://serverfault.com/questions/1079369/kubeadm-with-containerd-cannot-use-locally-loaded-images
[9] https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
[10] https://metallb.universe.tf/installation/
[11] https://www.lixueduan.com/posts/cloudnative/01-metallb/
[12] https://metallb.io/installation/
[13] https://platform9.com/learn/v1.0/tutorials/traefik-ingress
[14] https://doc.traefik.io/traefik/providers/kubernetes-crd/

[[Category:Linux/Unix]]
[[Category:Kubernetes]]