1 Environment Overview

Although tools such as kubeadm, kops, kubespray, rke, and kubesphere can deploy a K8s cluster quickly, many people remain keen on deploying K8s from binaries.

A binary deployment deepens your understanding of each K8s component and lets you flexibly place the components on different machines to suit your own requirements. It also lets you issue self-signed certificates with an extremely long lifetime, say 99 years, sparing you the production incidents caused by forgetting to renew an expiring certificate.

This article targets K8s 1.23.1, the latest version at the time of writing (2021-12-31). The procedure differs little from the 1.20 and 1.22 guides found online; it mainly follows Mr. Han Xianchao's binary deployment tutorial for K8s 1.20.

My environment is an Ubuntu 20.04 LTS virtual machine running on an M1 MacBook, so this K8s deployment targets the arm64 architecture.

1.0 Writing Conventions

  • Command-line input is marked with the ➜ prompt symbol
  • Comments are marked with # or //
  • Command output is separated by a blank line

1.1 Plan

Role         Hostname              IP           Components
master node  ubuntu-k8s-master-01  10.211.55.4  etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet
worker node  ubuntu-k8s-worker-01  10.211.55.5  kubelet, kube-proxy

1.2 Environment Setup

  • Set the hostnames

    # on host 10.211.55.4
    sudo hostnamectl set-hostname ubuntu-k8s-master-01
    # on host 10.211.55.5
    sudo hostnamectl set-hostname ubuntu-k8s-worker-01
  • Time synchronization

    # set the timezone
    sudo timedatectl set-timezone Asia/Shanghai
    # install the time synchronization service
    sudo apt-get update
    sudo apt-get install chrony
    sudo systemctl enable --now chrony
  • Hostname resolution

    # use tee: with `sudo cat >> /etc/hosts` the redirection would run in the unprivileged shell and fail
    cat << EOF | sudo tee -a /etc/hosts
    10.211.55.4 ubuntu-k8s-master-01
    10.211.55.5 ubuntu-k8s-worker-01
    EOF
  • Create the kubernetes certificate directory

    sudo mkdir -p /etc/kubernetes/pki

1.3 Download the K8s Binaries

Download the binary package from the official release page: download link

The Server Binaries tarball is sufficient; it contains all the required binaries. After extracting it, copy kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, kubelet, and kubectl to /usr/local/bin on the master node, and copy kube-proxy and kubelet to /usr/local/bin on the worker node.

  ll /usr/local/bin/kube*

  -rwxr-xr-x 1 root root 128516096 Dec 29 14:59 /usr/local/bin/kube-apiserver
  -rwxr-xr-x 1 root root 118489088 Dec 29 14:59 /usr/local/bin/kube-controller-manager
  -rwxr-xr-x 1 root root 46202880 Dec 29 14:59 /usr/local/bin/kubectl
  -rwxr-xr-x 1 root root 122352824 Dec 29 14:59 /usr/local/bin/kubelet
  -rwxr-xr-x 1 root root 43581440 Dec 29 14:59 /usr/local/bin/kube-proxy
  -rwxr-xr-x 1 root root 49020928 Dec 29 14:59 /usr/local/bin/kube-scheduler
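Before copying the binaries into place, it is worth verifying the downloaded tarball against its published sha256 checksum. A minimal sketch of the `sha256sum -c` workflow; the tarball here is a local stand-in (the real release publishes a companion checksum file alongside the download):

```shell
# Hypothetical sketch: verify a tarball against its .sha256 companion file.
tarball=/tmp/kubernetes-server-linux-arm64.tar.gz
printf 'fake tarball contents' > "$tarball"       # stand-in for the real download
sha256sum "$tarball" > "$tarball.sha256"          # normally fetched from the release page
sha256sum -c "$tarball.sha256"                    # prints "<file>: OK" on a match
```

Only extract and copy the binaries once the check reports OK.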

2 Install Docker

Reference: install docker. Docker must be installed on every node.

Docker configuration

  # set docker's cgroup driver to systemd; kubelet defaults to systemd, and a mismatch will prevent kubelet from starting
  # write (not append) the file so the result stays valid JSON, and use tee so the write runs with root privileges
  cat << EOF | sudo tee /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF
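A malformed daemon.json will stop the Docker daemon from starting, so it pays to validate the JSON before restarting Docker. A small self-contained sketch using python3's built-in json.tool (assumed available on Ubuntu); run the same validation against /etc/docker/daemon.json:

```shell
# Validate JSON syntax before handing the file to dockerd.
# A temp copy is used here so the sketch is self-contained.
cfg=/tmp/daemon.json
cat > "$cfg" << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: valid JSON"
```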

Installation

  # remove old versions (skip if Docker was never installed)
  sudo apt-get remove docker docker-engine docker.io containerd runc
  # set up the repository
  sudo apt-get update
  sudo apt-get install ca-certificates curl gnupg lsb-release
  # add the GPG key
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  # add the apt source
  echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  # install docker
  sudo apt-get update
  sudo apt-get install docker-ce docker-ce-cli containerd.io
  # enable docker at boot
  sudo systemctl enable docker

3 Sign the CA Certificate

3.1 Build cfssl

cfssl is a certificate signing tool; it greatly simplifies the signing process and makes issuing self-signed certificates convenient.

cfssl has no official arm64 binary release, so it must be built from source. If you are deploying on amd64, simply download the official binaries instead.

  # download golang
  wget https://dl.google.com/go/go1.17.5.linux-arm64.tar.gz
  # extract to /usr/local/go
  sudo tar xf go1.17.5.linux-arm64.tar.gz -C /usr/local/
  # verify (assumes /usr/local/go/bin is on your PATH)
  go version
  # build cfssl and cfssljson (Go 1.17 requires an explicit version suffix with `go install`)
  go install github.com/cloudflare/cfssl/cmd/cfssl@latest
  go install github.com/cloudflare/cfssl/cmd/cfssljson@latest
  # copy the binaries to /usr/local/bin
  sudo cp ~/go/bin/cfssl ~/go/bin/cfssljson /usr/local/bin

3.2 Sign the CA Certificate

  # signed certificates all go into ~/ssl first and are copied to /etc/kubernetes/pki afterwards
  mkdir ~/ssl
  cd ~/ssl
  # certificate signing configuration
  # the expiry field is the certificate lifetime; here it is nearly 99 years, so expiry is essentially a non-issue. Ten years or more is recommended; it costs nothing
  # the file is named ca-config.json because the later `cfssl gencert -config=ca-config.json` commands reference it by that name
  cat > ca-config.json << EOF
  {
    "signing": {
      "default": {
        "expiry": "867240h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ],
          "expiry": "867240h"
        }
      }
    }
  }
  EOF
  # CA certificate signing request
  cat > ca-csr.json << EOF
  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "k8s",
        "OU": "system"
      }
    ],
    "ca": {
      "expiry": "867240h"
    }
  }
  EOF
  # sign the CA certificate
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca
  # verify: two certificate files are generated
  ll ca*pem

  -rw------- 1 haxi haxi 1675 Dec 30 11:32 ca-key.pem
  -rw-rw-r-- 1 haxi haxi 1314 Dec 30 11:32 ca.pem

  # copy the CA certificate to /etc/kubernetes/pki
  sudo cp ca*pem /etc/kubernetes/pki
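You can double-check the certificate's actual lifetime with openssl. The sketch below generates a throwaway self-signed certificate (867240h ≈ 36135 days) just to demonstrate the two commands; run the same openssl calls against your real ca.pem:

```shell
# Throwaway certificate standing in for ca.pem, valid ~99 years.
openssl req -x509 -newkey rsa:2048 -nodes -days 36135 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem -subj "/CN=kubernetes" 2>/dev/null
# Print the expiry date.
openssl x509 -in /tmp/demo.pem -noout -enddate
# Exit 0 if the certificate is still valid 10 years (315360000 s) from now.
openssl x509 -in /tmp/demo.pem -noout -checkend 315360000 && echo "still valid in 10 years"
```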

4 Deploy etcd

This deployment uses etcd 3.5.1, the latest version. Download the binaries: etcd download link

4.1 Issue the Certificate

  # etcd certificate signing request
  # the IPs in the hosts field cover all etcd cluster nodes; plan ahead and reserve a few IPs for future scaling. I list 6 IPs here: enough for a future etcd cluster plus some spares
  cat > etcd-csr.json << EOF
  {
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "10.211.55.2",
      "10.211.55.3",
      "10.211.55.4",
      "10.211.55.22",
      "10.211.55.23",
      "10.211.55.24"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "k8s",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the etcd certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
  # verify: two certificate files are generated
  ll etcd*pem

  -rw------- 1 haxi haxi 1679 Dec 30 11:32 etcd-key.pem
  -rw-rw-r-- 1 haxi haxi 1440 Dec 30 11:32 etcd.pem

  # copy the etcd certificate to /etc/kubernetes/pki
  sudo cp etcd*pem /etc/kubernetes/pki

4.2 Deploy etcd

Download and extract the etcd tarball (etcd download link), then copy the etcd and etcdctl binaries to /usr/local/bin

  ll /usr/local/bin/etcd*

  -rwxrwxr-x 1 root root 21823488 Dec 29 14:13 /usr/local/bin/etcd
  -rwxrwxr-x 1 root root 16711680 Dec 29 14:13 /usr/local/bin/etcdctl

Write the service configuration file

  # create the config and data directories first
  sudo mkdir -p /etc/etcd /var/lib/etcd
  cat << EOF | sudo tee /etc/etcd/etcd.conf
  ETCD_NAME="etcd1"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_PEER_URLS="https://10.211.55.4:2380"
  ETCD_LISTEN_CLIENT_URLS="https://10.211.55.4:2379"
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.4:2380"
  ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.4:2379"
  ETCD_INITIAL_CLUSTER="etcd1=https://10.211.55.4:2380"
  ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_INITIAL_CLUSTER_STATE="new"
  EOF
  # explanation of the settings
  ETCD_NAME: node name, unique within the cluster
  ETCD_DATA_DIR: data directory
  ETCD_LISTEN_PEER_URLS: listen address for peer (intra-cluster) communication
  ETCD_LISTEN_CLIENT_URLS: listen address for client access
  ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  ETCD_INITIAL_CLUSTER: list of cluster node addresses
  ETCD_INITIAL_CLUSTER_TOKEN: cluster communication token
  ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one
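The KEY="value" format above is consumed by systemd via EnvironmentFile, and it also happens to be valid shell, so you can sanity-check a config by sourcing it. A self-contained sketch using a temp copy (the file path and the subset of keys are illustrative):

```shell
# Write a sample config and source it the way you might when debugging.
conf=/tmp/etcd.conf
cat > "$conf" << EOF
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_INITIAL_CLUSTER="etcd1=https://10.211.55.4:2380"
EOF
. "$conf"
echo "node $ETCD_NAME stores data in $ETCD_DATA_DIR"
```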

Write the service startup script

  cat << EOF | sudo tee /lib/systemd/system/etcd.service
  [Unit]
  Description=etcd server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  [Service]
  Type=notify
  EnvironmentFile=-/etc/etcd/etcd.conf
  WorkingDirectory=/var/lib/etcd
  ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/kubernetes/pki/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the etcd service

  sudo systemctl daemon-reload
  sudo systemctl enable --now etcd
  # verify
  sudo systemctl status etcd
  # view the logs
  sudo journalctl -u etcd

5 Deploy kube-apiserver

5.1 Issue the Certificate

  # apiserver certificate signing request
  # the IPs in the hosts field cover all apiserver nodes; plan ahead and reserve a few IPs for future scaling. I list 6 IPs here:
  # 10.211.55.2 10.211.55.3 10.211.55.4 10.211.55.22 10.211.55.23 10.211.55.24
  # 10.96.0.1 is the first IP of the service network
  # kubernetes.default.svc.cluster.local and its shorter forms are the service domain names of the apiserver
  cat > kube-apiserver-csr.json << EOF
  {
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.211.55.2",
      "10.211.55.3",
      "10.211.55.4",
      "10.211.55.22",
      "10.211.55.23",
      "10.211.55.24",
      "10.96.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "k8s",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the apiserver certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
  # verify: two certificate files are generated
  ll kube-apiserver*pem

  -rw------- 1 haxi haxi 1675 Dec 30 11:33 kube-apiserver-key.pem
  -rw-rw-r-- 1 haxi haxi 1590 Dec 30 11:33 kube-apiserver.pem

  # copy the apiserver certificate to /etc/kubernetes/pki
  sudo cp kube-apiserver*pem /etc/kubernetes/pki

5.2 Deploy kube-apiserver

Write the service configuration file

  cat << EOF | sudo tee /etc/kubernetes/kube-apiserver.conf
  KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/pki/etcd-key.pem \
  --etcd-servers=https://10.211.55.4:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=1 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
  EOF

Generate the token file

  cat << EOF | sudo tee /etc/kubernetes/token.csv
  $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  EOF
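The token is 16 random bytes rendered as 32 hex characters. A quick sanity-check sketch of the same generation pipeline, verifying the token's shape before kubelet bootstrap depends on it:

```shell
# Generate a bootstrap token the same way and check its shape.
token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "token: $token"
# 32 hex characters expected
[ ${#token} -eq 32 ] && echo "token length OK"
```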

Write the service startup script

  cat << "EOF" | sudo tee /lib/systemd/system/kube-apiserver.service
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target network-online.target
  Wants=network-online.target
  [Service]
  Type=notify
  EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
  ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the kube-apiserver service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kube-apiserver
  # verify
  sudo systemctl status kube-apiserver
  # view the logs
  sudo journalctl -u kube-apiserver

6 Deploy kubectl

With kube-apiserver up, kubectl can be deployed next; it lets you verify that the apiserver is working properly.

6.1 Issue the Certificate

  # kubectl certificate signing request
  # the O field must be system:masters: the apiserver grants this built-in group cluster administration privileges
  cat > kubectl-csr.json << EOF
  {
    "CN": "clusteradmin",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "system:masters",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the kubectl certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl
  # verify: two certificate files are generated
  ll kubectl*pem

  -rw------- 1 haxi haxi 1675 Dec 30 11:34 kubectl-key.pem
  -rw-rw-r-- 1 haxi haxi 1415 Dec 30 11:34 kubectl.pem

6.2 Generate the kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube.config
  kubectl config set-credentials clusteradmin --client-certificate=kubectl.pem --client-key=kubectl-key.pem --embed-certs=true --kubeconfig=kube.config
  kubectl config set-context kubernetes --cluster=kubernetes --user=clusteradmin --kubeconfig=kube.config
  kubectl config use-context kubernetes --kubeconfig=kube.config
  mkdir -p ~/.kube
  cp kube.config ~/.kube/config

6.3 Get Cluster Information

  kubectl cluster-info

  Kubernetes control plane is running at https://10.211.55.4:6443
  To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

  kubectl get all -A

  NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   23h

  kubectl get cs

  Warning: v1 ComponentStatus is deprecated in v1.19+
  NAME     STATUS    MESSAGE                         ERROR
  etcd-0   Healthy   {"health":"true","reason":""}

7 Deploy kube-controller-manager

7.1 Issue the Certificate

  # controller-manager certificate signing request
  # the IPs in the hosts field cover all nodes; plan ahead and reserve a few IPs for future scaling. I list 6 IPs here
  cat > kube-controller-manager-csr.json << EOF
  {
    "CN": "system:kube-controller-manager",
    "hosts": [
      "127.0.0.1",
      "10.211.55.2",
      "10.211.55.3",
      "10.211.55.4",
      "10.211.55.22",
      "10.211.55.23",
      "10.211.55.24"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the controller-manager certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  # verify: two certificate files are generated
  ll kube-controller-manager*pem

  -rw------- 1 haxi haxi 1679 Dec 30 12:13 kube-controller-manager-key.pem
  -rw-rw-r-- 1 haxi haxi 1513 Dec 30 12:13 kube-controller-manager.pem

  # copy the controller-manager certificate to /etc/kubernetes/pki
  sudo cp kube-controller-manager*pem /etc/kubernetes/pki

7.2 Deploy kube-controller-manager

Write the service configuration file

  cat << EOF | sudo tee /etc/kubernetes/kube-controller-manager.conf
  KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10257 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --cluster-signing-duration=867240h \
  --tls-cert-file=/etc/kubernetes/pki/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --use-service-account-credentials=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/12 \
  --v=4"
  EOF

Generate the kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-controller-manager.kubeconfig
  kubectl config set-credentials kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
  kubectl config set-context default --cluster=kubernetes --user=kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  sudo cp kube-controller-manager.kubeconfig /etc/kubernetes/

Write the service startup script

  cat << "EOF" | sudo tee /lib/systemd/system/kube-controller-manager.service
  [Unit]
  Description=Kubernetes controller manager
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target network-online.target
  Wants=network-online.target
  [Service]
  EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
  ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the kube-controller-manager service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kube-controller-manager
  # verify
  sudo systemctl status kube-controller-manager
  # view the logs
  sudo journalctl -u kube-controller-manager

Check component status

  kubectl get cs

  Warning: v1 ComponentStatus is deprecated in v1.19+
  NAME                 STATUS    MESSAGE                         ERROR
  controller-manager   Healthy   ok
  etcd-0               Healthy   {"health":"true","reason":""}

8 Deploy kube-scheduler

8.1 Issue the Certificate

  # scheduler certificate signing request
  # the IPs in the hosts field cover all nodes; plan ahead and reserve a few IPs for future scaling. I list 6 IPs here
  cat > kube-scheduler-csr.json << EOF
  {
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "10.211.55.2",
      "10.211.55.3",
      "10.211.55.4",
      "10.211.55.22",
      "10.211.55.23",
      "10.211.55.24"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the scheduler certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  # verify: two certificate files are generated
  ll kube-scheduler*pem

  -rw------- 1 haxi haxi 1679 Dec 30 13:19 kube-scheduler-key.pem
  -rw-rw-r-- 1 haxi haxi 1489 Dec 30 13:19 kube-scheduler.pem

  # copy the scheduler certificate to /etc/kubernetes/pki
  sudo cp kube-scheduler*pem /etc/kubernetes/pki

8.2 Deploy kube-scheduler

Write the service configuration file

  cat << EOF | sudo tee /etc/kubernetes/kube-scheduler.conf
  KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
  EOF

Generate the kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-scheduler.kubeconfig
  kubectl config set-credentials kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
  kubectl config set-context default --cluster=kubernetes --user=kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
  sudo cp kube-scheduler.kubeconfig /etc/kubernetes/

Write the service startup script

  cat << "EOF" | sudo tee /lib/systemd/system/kube-scheduler.service
  [Unit]
  Description=Kubernetes scheduler
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target network-online.target
  Wants=network-online.target
  [Service]
  EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
  ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the kube-scheduler service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kube-scheduler
  # verify
  sudo systemctl status kube-scheduler
  # view the logs
  sudo journalctl -u kube-scheduler

Check component status

  kubectl get cs

  Warning: v1 ComponentStatus is deprecated in v1.19+
  NAME                 STATUS    MESSAGE                         ERROR
  scheduler            Healthy   ok
  controller-manager   Healthy   ok
  etcd-0               Healthy   {"health":"true","reason":""}

9 Deploy kubelet

Deploying kubelet on the master node is optional: once kubelet runs there, the master can also run Pods. If you do not want Pods on the master, taint the node instead.

Running kubelet on the master does have benefits: you can inspect the node with commands such as kubectl get node, and you can run monitoring and log-collection agents on it.

9.1 Generate the kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kubelet.kubeconfig
  kubectl config set-credentials kubelet-bootstrap --token=$(awk -F, '{print $1}' /etc/kubernetes/token.csv) --kubeconfig=kubelet.kubeconfig
  kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet.kubeconfig
  kubectl config use-context default --kubeconfig=kubelet.kubeconfig
  sudo cp kubelet.kubeconfig /etc/kubernetes/

9.2 Deploy kubelet

Write the service configuration file

  cat << EOF | sudo tee /etc/kubernetes/kubelet.conf
  KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --config=/etc/kubernetes/kubelet.yaml \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/pki \
  --network-plugin=cni \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
  --logtostderr=false \
  --v=4 \
  --log-dir=/var/log/kubernetes \
  --fail-swap-on=false"
  EOF
  cat << EOF | sudo tee /etc/kubernetes/kubelet.yaml
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  address: 0.0.0.0
  port: 10250
  readOnlyPort: 0
  authentication:
    anonymous:
      enabled: false
    webhook:
      cacheTTL: 2m0s
      enabled: true
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.pem
  authorization:
    mode: Webhook
    webhook:
      cacheAuthorizedTTL: 5m0s
      cacheUnauthorizedTTL: 30s
  cgroupDriver: systemd
  clusterDNS:
  - 10.96.0.10
  clusterDomain: cluster.local
  healthzBindAddress: 127.0.0.1
  healthzPort: 10248
  rotateCertificates: true
  evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
  maxOpenFiles: 1000000
  maxPods: 110
  EOF

Generate the bootstrap kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
  kubectl config set-credentials kubelet-bootstrap --token=$(awk -F, '{print $1}' /etc/kubernetes/token.csv) --kubeconfig=kubelet-bootstrap.kubeconfig
  kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
  kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
  sudo cp kubelet-bootstrap.kubeconfig /etc/kubernetes/

Write the service startup script

  cat << "EOF" | sudo tee /lib/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes kubelet
  After=network.target network-online.target docker.service
  Wants=docker.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/kubelet.conf
  ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the kubelet service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kubelet
  # verify
  sudo systemctl status kubelet
  # view the logs
  sudo journalctl -u kubelet

Approve the node joining the cluster

  kubectl get csr

  NAME        AGE   SIGNERNAME                                    REQUESTOR           CONDITION
  csr-nhjj4   87s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

  kubectl certificate approve csr-nhjj4
  kubectl get csr

  NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
  csr-nhjj4   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

View the node

  kubectl get node

  NAME                   STATUS     ROLES    AGE     VERSION
  ubuntu-k8s-master-01   NotReady   <none>   2m40s   v1.23.1

  # the node is still NotReady because no network plugin is installed yet; once a network plugin is properly installed, the status turns Ready

10 Deploy kube-proxy

10.1 Issue the Certificate

  # kube-proxy certificate signing request
  cat > kube-proxy-csr.json << EOF
  {
    "CN": "system:kube-proxy",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Guangdong",
        "L": "Zhuhai",
        "O": "k8s",
        "OU": "system"
      }
    ]
  }
  EOF
  # sign the kube-proxy certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  # verify: two certificate files are generated
  ll kube-proxy*pem

  -rw------- 1 haxi haxi 1679 Dec 31 10:26 kube-proxy-key.pem
  -rw-rw-r-- 1 haxi haxi 1407 Dec 31 10:26 kube-proxy.pem

  # copy the kube-proxy certificate to /etc/kubernetes/pki
  sudo cp kube-proxy*pem /etc/kubernetes/pki

10.2 Deploy kube-proxy

Write the service configuration file

  cat << EOF | sudo tee /etc/kubernetes/kube-proxy.conf
  KUBE_PROXY_OPTS="--config=/etc/kubernetes/kube-proxy.yaml \
  --logtostderr=false \
  --v=4 \
  --log-dir=/var/log/kubernetes"
  EOF
  cat << EOF | sudo tee /etc/kubernetes/kube-proxy.yaml
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  clientConnection:
    kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  bindAddress: 0.0.0.0
  clusterCIDR: 10.244.0.0/12
  healthzBindAddress: 0.0.0.0:10256
  metricsBindAddress: 0.0.0.0:10249
  mode: ipvs
  ipvs:
    scheduler: "rr"
  EOF

Generate the kubeconfig

  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-proxy.kubeconfig
  kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
  kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  sudo cp kube-proxy.kubeconfig /etc/kubernetes/

Write the service startup script

  cat << "EOF" | sudo tee /lib/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Proxy
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target network-online.target
  Wants=network-online.target
  [Service]
  EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
  ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65535
  [Install]
  WantedBy=multi-user.target
  EOF

Start the kube-proxy service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kube-proxy
  # verify
  sudo systemctl status kube-proxy
  # view the logs
  sudo journalctl -u kube-proxy

11 Deploy calico

Reference: calico

  curl https://docs.projectcalico.org/manifests/calico.yaml -O
  # change the Pod IP range: find the CALICO_IPV4POOL_CIDR variable, uncomment it, and set it as follows
  - name: CALICO_IPV4POOL_CIDR
    value: "10.244.0.0/12"
  kubectl apply -f calico.yaml

12 Deploy coredns

Reference: coredns

Download the yaml file and make the following changes:

  • Change CLUSTER_DOMAIN to cluster.local
  • Change REVERSE_CIDRS to in-addr.arpa ip6.arpa
  • Change UPSTREAMNAMESERVER to /etc/resolv.conf; if that produces errors, use the DNS address of your current network instead
  • Remove STUBDOMAINS
  • Change CLUSTER_DNS_IP to 10.96.0.10 (it must match the clusterDNS setting in /etc/kubernetes/kubelet.yaml)

  kubectl apply -f coredns.yaml
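These placeholders can be substituted with sed instead of editing by hand. The sketch below demonstrates the substitutions on a minimal stand-in template (the placeholder names follow the list above; the real downloaded template has much more context around them):

```shell
# Minimal stand-in template containing the placeholders to replace.
tpl=/tmp/coredns.yaml.sed
cat > "$tpl" << 'EOF'
kubernetes CLUSTER_DOMAIN REVERSE_CIDRS
forward . UPSTREAMNAMESERVER
clusterIP: CLUSTER_DNS_IP
EOF
# Apply the edits listed above; '#' delimiters avoid escaping the path slashes.
sed -e 's/CLUSTER_DOMAIN/cluster.local/' \
    -e 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/' \
    -e 's#UPSTREAMNAMESERVER#/etc/resolv.conf#' \
    -e 's/CLUSTER_DNS_IP/10.96.0.10/' "$tpl" > /tmp/coredns.yaml
cat /tmp/coredns.yaml
```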

Verify

  kubectl -n kube-system get pod

  NAME                                       READY   STATUS    RESTARTS   AGE
  calico-kube-controllers-647d84984b-xh55r   1/1     Running   0          41m
  calico-node-d9jqp                          1/1     Running   0          40m
  coredns-f89fb968f-sxq4m                    1/1     Running   0          30m

  kubectl get node

  NAME                   STATUS   ROLES    AGE     VERSION
  ubuntu-k8s-master-01   Ready    <none>   5d23h   v1.23.1

13 Add a Worker Node

The worker node needs two components: kubelet and kube-proxy.

Copy the following files from the master node to the worker node:

  • /etc/kubernetes/pki/ca.pem
  • /etc/kubernetes/kubelet-bootstrap.kubeconfig
  • /etc/kubernetes/kubelet.yaml
  • /etc/kubernetes/kubelet.conf
  • /lib/systemd/system/kubelet.service
  • /etc/kubernetes/pki/kube-proxy-key.pem
  • /etc/kubernetes/pki/kube-proxy.pem
  • /etc/kubernetes/kube-proxy.conf
  • /etc/kubernetes/kube-proxy.yaml
  • /lib/systemd/system/kube-proxy.service

Copy the kubelet and kube-proxy binaries to /usr/local/bin.
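Copying these one by one is error-prone, so a loop over the list helps. A sketch assuming SSH access to the worker by hostname; it only echoes the scp commands so you can review them before running them for real:

```shell
# Hypothetical helper: print one scp command per file to copy.
worker=ubuntu-k8s-worker-01   # assumed SSH-reachable hostname
files="/etc/kubernetes/pki/ca.pem
/etc/kubernetes/kubelet-bootstrap.kubeconfig
/etc/kubernetes/kubelet.yaml
/etc/kubernetes/kubelet.conf
/lib/systemd/system/kubelet.service
/etc/kubernetes/pki/kube-proxy-key.pem
/etc/kubernetes/pki/kube-proxy.pem
/etc/kubernetes/kube-proxy.conf
/etc/kubernetes/kube-proxy.yaml
/lib/systemd/system/kube-proxy.service"
for f in $files; do
  # drop the `echo` once the list looks right
  echo scp "$f" "$worker:$f"
done
```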

Start the kube-proxy service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kube-proxy
  # verify
  sudo systemctl status kube-proxy
  # view the logs
  sudo journalctl -u kube-proxy

Start the kubelet service

  sudo systemctl daemon-reload
  sudo systemctl enable --now kubelet
  # verify
  sudo systemctl status kubelet
  # view the logs
  sudo journalctl -u kubelet

Approve the node joining the cluster

  kubectl get csr

  NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
  csr-xxxxx   87s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

  kubectl certificate approve csr-xxxxx
  kubectl get csr

  NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
  csr-xxxxx   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

View the nodes

  kubectl get node

  NAME                   STATUS   ROLES    AGE     VERSION
  ubuntu-k8s-master-01   Ready    <none>   2m40s   v1.23.1
  ubuntu-k8s-worker-01   Ready    <none>   2m40s   v1.23.1

Postscript

At this point, a 1-master, 1-worker binary K8s cluster is complete.

You can also give the nodes role labels, which makes node listings more informative

  # label the master node with the controlplane and etcd roles
  kubectl label node ubuntu-k8s-master-01 node-role.kubernetes.io/controlplane=true node-role.kubernetes.io/etcd=true
  # label the worker node with the worker role
  kubectl label node ubuntu-k8s-worker-01 node-role.kubernetes.io/worker=true

If you do not want the master node to run Pods, taint it

  kubectl taint node ubuntu-k8s-master-01 node-role.kubernetes.io/controlplane=true:NoSchedule

Next, I will add 2 more etcd nodes to form an etcd cluster, and 2 more control planes, to avoid single points of failure.
