Kubernetes

Deploying a Production Kubernetes Cluster Offline from Binaries

Contents

1. Architecture Basics

1.1 Master Components

1.1.1 etcd

  • Provides data storage and watch services for kube-apiserver.
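
As an illustration, once etcd and the apiserver are running (set up in later sections), you can observe the apiserver's writes directly; the endpoint, certificate paths, and etcd prefix below assume the values configured later in this guide:

```shell
# Stream every change the apiserver persists under its etcd prefix
# (--etcd-prefix=cn-south1-k8s-t1, as configured in section 2.7.2).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.121:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  watch --prefix /cn-south1-k8s-t1/
```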

1.1.2 kube-apiserver

  • The single entry point to the entire cluster; provides authentication, authorization, admission control, and API registration and discovery.

1.1.3 kube-controller-manager

  • Maintains cluster state (failure detection, auto scaling, rolling updates, etc.), driving resources toward their desired values.

1.1.4 kube-scheduler

  • Schedules Pods onto suitable nodes, using predicate (filtering) and priority (scoring) policies.

1.2 Node Components

1.2.1 kubelet

  • The agent that runs on each cluster node. The kubelet uses various mechanisms to ensure containers are running and healthy; it does not manage containers that were not created by Kubernetes. It receives a Pod's desired state (replicas, image, network, etc.) and drives the container runtime to realize that state. It periodically reports node status to the apiserver, which kube-scheduler uses as the basis for scheduling, and it garbage-collects unused images and containers to avoid wasting disk space.
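
For example, a running kubelet exposes a liveness endpoint on its healthz port (10248 by default), which is a quick way to confirm the agent itself is healthy:

```shell
# Query the kubelet's local healthz endpoint on a node.
curl -s http://127.0.0.1:10248/healthz
# A healthy kubelet returns: ok
```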

1.2.2 kube-proxy

  • kube-proxy is a network proxy that runs on each cluster node and is one of the components that implement the Service resource. It maps between the Pod network and the cluster network; Service forwarding rules across different Nodes are updated by kube-proxy calling kube-apiserver, which persists them in etcd. Service traffic can be forwarded in three modes: userspace (deprecated, very poor performance), iptables (poor performance, complex, being phased out), and ipvs (good performance, clear forwarding rules).
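
The difference between the modes is easy to see on a node once kube-proxy is running (ipvsadm must be installed to inspect ipvs mode):

```shell
# iptables mode: Services materialize as NAT chains.
sudo iptables -t nat -L KUBE-SERVICES | head -n 20

# ipvs mode: each ClusterIP becomes a virtual server whose real servers
# are the backing Pod IPs.
sudo ipvsadm -Ln
```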

1.2.3 coredns (recommended)

  • Provides DNS resolution within the cluster for Services and their Pod IPs.

1.2.4 flannel / cilium / calico (calico recommended: mature and stable)

  • Implements CNI, providing network resources for cluster Pods.

1.3 Istio Components

1.3.1 Envoy

  • Provides traffic ingress and egress for Pods as a sidecar.

1.3.2 Deploy the bookinfo heterogeneous microservices demo

  • TODO

2. Production Kubernetes Cluster Deployment (naming, path, and other conventions)

2.1 Deployment Topology

  • Cluster Name: cn-south1-k8s-t1 (short for China South1 Kubernetes Test Cluster 1), following the multi-datacenter naming conventions used by cloud vendors such as AWS, GCP, and Aliyun
  • Cluster node naming: <ClusterName>.<NodeIP>, as displayed by kubectl get nodes
IP          Host          kubelet (--hostname-override)  Core Components
10.0.0.121  k8s-master-1  cn-south1-k8s-t1.10.0.0.121    etcd1 / coredns / kube-apiserver / kube-controller-manager / kube-scheduler
10.0.0.122  k8s-master-2  cn-south1-k8s-t1.10.0.0.122    etcd2 / coredns / kube-apiserver / kube-controller-manager / kube-scheduler
10.0.0.123  k8s-master-3  cn-south1-k8s-t1.10.0.0.123    etcd3 / coredns / kube-apiserver / kube-controller-manager / kube-scheduler
10.0.0.124  k8s-worker-1  cn-south1-k8s-t1.10.0.0.124    kubelet / kube-proxy / calico or flannel or cilium
10.0.0.125  k8s-worker-2  cn-south1-k8s-t1.10.0.0.125    kubelet / kube-proxy / calico or flannel or cilium

2.2 System Configuration

2.2.1 System Requirements

2.2.2 Configure a static IP (optional; typically needed for VMs, while physical machines are usually preconfigured by the datacenter)

  • Example for Ubuntu 20: the following configures the first machine's IP; run it on each machine in turn, adjusting for your actual environment.
# Back up
sudo cp /etc/netplan/01-network-manager-all.yaml /etc/netplan/01-network-manager-all.yaml.bak

# Download
sudo curl -4sSkL -o /etc/netplan/01-network-manager-all.yaml https://gitee.com/wl4g/blogs/raw/master/articles/kubernetes/kubernetes-offline-binary-production-deployment/resources/etc/netplan/01-network-manager-all.yaml

# Apply
sudo netplan apply --debug
  • Example for CentOS 7: the following configures the first machine's IP; run it on each machine in turn, adjusting for your actual environment.
# Back up
sudo cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0.bak

# Download
sudo curl -4sSkL -o /etc/sysconfig/network-scripts/ifcfg-eth0 https://gitee.com/wl4g/blogs/raw/master/articles/kubernetes/kubernetes-offline-binary-production-deployment/resources/etc/sysconfig/network-scripts/ifcfg-eth0

# Apply
sudo systemctl restart network

2.2.3 Passwordless SSH (for deployment convenience only)

  • Passwordless access is only needed among all Masters and from all Masters to all Nodes.

  • TODO
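
A minimal sketch of the key distribution, assuming a k8s login user and the hostnames from the topology table in 2.1 (both are assumptions; adjust to your environment):

```shell
# Generate a key pair on the first master if one does not already exist.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa -q

# Push the public key to the other masters and all workers.
for h in k8s-master-2 k8s-master-3 k8s-worker-1 k8s-worker-2; do
  ssh-copy-id -o ConnectTimeout=5 -i ~/.ssh/id_rsa.pub k8s@$h \
    || echo "WARN: could not reach $h"
done
```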

2.2.4 Install chrony on all nodes and configure time synchronization
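
A sketch (package name and config path differ slightly across distros; the NTP server address is a placeholder for your internal time source):

```shell
# Install chrony from your (offline) package mirror.
sudo apt install -y chrony 2>/dev/null || sudo yum install -y chrony

# Point chrony at an internal NTP server (config lives at
# /etc/chrony/chrony.conf on Ubuntu, /etc/chrony.conf on CentOS).
echo "server 10.0.0.1 iburst" | sudo tee -a /etc/chrony/chrony.conf >/dev/null

# The service is named chrony on Ubuntu, chronyd on CentOS.
sudo systemctl enable --now chrony 2>/dev/null || sudo systemctl enable --now chronyd

# Verify synchronization.
chronyc tracking
chronyc sources -v
```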

2.2.5 Kernel tuning

sudo curl -4sSkL -o /etc/sysctl.d/99-kube.conf https://gitee.com/wl4g/blogs/raw/master/articles/kubernetes/kubernetes-offline-binary-production-deployment/resources/etc/sysctl.d/99-kube.conf

# Apply (sysctl -p only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/*.conf)
sudo sysctl --system

# Disable swap
sudo swapoff -a
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/swap/d' /etc/fstab # remove swap entries.
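
A quick check that the settings took effect:

```shell
# IP forwarding must be enabled for Service/Pod traffic.
sysctl -n net.ipv4.ip_forward        # expect: 1

# SwapTotal should read 0 kB once swap is off and removed from fstab.
grep '^Swap' /proc/meminfo
```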

2.3 Deploy docker on all nodes

2.4 Deploy etcd on all nodes

2.5 Download and install the Kubernetes binary packages

# Create the install directory.
sudo mkdir -p /usr/lib/kubernetes-current

# Environment configuration (a plain "sudo cat <<EOF > file" fails because the
# redirection runs without root privileges, so write the file via tee).
sudo tee /etc/profile.d/profile-kubernetes.sh >/dev/null <<-'EOF'
#!/bin/bash
# Copyright (c) 2017 ~ 2025, the original author wangl.sir individual Inc,
# All rights reserved. Contact us <wanglsir@gmail.com, 983708408@qq.com>
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
export KUBERNETES_HOME=/usr/lib/kubernetes-current
export PATH=$PATH:$KUBERNETES_HOME
EOF

# Apply.
. /etc/profile

# Download the packages (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#downloads-for-v1222)
cd $KUBERNETES_HOME
sudo curl -o kubernetes-client-linux-arm64.tar.gz -L https://dl.k8s.io/v1.22.2/kubernetes-client-linux-arm64.tar.gz
sudo curl -o kubernetes-server-linux-arm64.tar.gz -L https://dl.k8s.io/v1.22.2/kubernetes-server-linux-arm64.tar.gz
sudo curl -o kubernetes-node-linux-arm64.tar.gz -L https://dl.k8s.io/v1.22.2/kubernetes-node-linux-arm64.tar.gz
sudo tar -xf kubernetes-client-linux-arm64.tar.gz
sudo tar -xf kubernetes-server-linux-arm64.tar.gz
sudo tar -xf kubernetes-node-linux-arm64.tar.gz
sudo mv kubernetes/client/bin/* .
sudo mv kubernetes/server/bin/* .
sudo mv kubernetes/node/bin/* .

# Tidy up.
sudo mkdir images; sudo mv *.tar *.docker_tag images
sudo rm -rf kubernetes

# Symlink the binaries.
for f in $KUBERNETES_HOME/*; do [ -f "$f" ] && sudo ln -snf "$f" /usr/bin/"${f##*/}"; done
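
Sanity-check the installation before continuing:

```shell
# Each binary should report v1.22.2 and resolve via the /usr/bin symlinks.
kubectl version --client
kube-apiserver --version
kubelet --version
```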

2.6 Deploy kubeadm on all nodes

  • TODO

2.7 Deploy kube-apiserver on the master nodes

2.7.1 Self-sign the kube-apiserver certificates

  • /etc/kubernetes/ssl
sudo curl -L -o /bin/cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
sudo curl -L -o /bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64

sudo chmod +x /bin/cfssl
sudo chmod +x /bin/cfssljson
# or: sudo apt install golang-cfssl

sudo mkdir -p /etc/kubernetes/ssl
cd /etc/kubernetes/ssl

# Generating config.
sudo tee config.json >/dev/null <<-'EOF'
{"signing":{"default":{"expiry":"87600h"},"profiles":{"cn-south1-k8s-t1":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"87600h"}}}}
EOF

# Generating CA certificate signing request config.
sudo tee ca-csr.json >/dev/null <<-'EOF'
{"CN":"WL4G Root CA cert issuer","CA":{"expiry":"87600h","pathlen":0},"key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"San Francisco 12th street","O":"WL4G company, Inc.","OU":"www dept","ST":"California"}]}
EOF

# Generating apiserver certificate signing request config.
# Note: SAN hosts must be bare hostnames/IPs, without an https:// scheme.
sudo tee apiserver-csr.json >/dev/null <<-'EOF'
{"hosts":["127.0.0.1","192.168.0.1","10.0.0.121","10.0.0.122","10.0.0.123","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local","k8s.wl4gcs.com","n1.k8s.wl4gcs.com","n2.k8s.wl4gcs.com","n3.k8s.wl4gcs.com"],"CN":"wl4g.com","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"GuangZhou 6th street","O":"SM, Inc.","OU":"WWW dept","ST":"GuangDong"}]}
EOF

# Generating apiserver client certificate signing request config.
sudo tee apiserver-client-csr.json >/dev/null <<-'EOF'
{"hosts":[],"CN":"wl4g.com","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"GuangZhou 6th street","O":"SM, Inc.","OU":"WWW dept","ST":"GuangDong"}]}
EOF

# Generating CA certificate.
sudo cfssl genkey -initca ca-csr.json | sudo cfssljson -bare ca

# Generating apiserver certificate.
sudo cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=cn-south1-k8s-t1 apiserver-csr.json | sudo cfssljson -bare apiserver

# Generating apiserver client certificate.
sudo cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=cn-south1-k8s-t1 apiserver-client-csr.json | sudo cfssljson -bare apiserver-client

# Print the CA, apiserver, and client certificates.
openssl x509 -in ca.pem -noout -text
openssl x509 -in apiserver.pem -noout -text
openssl x509 -in apiserver-client.pem -noout -text
# or
openssl x509 -noout -text -in <(cat *-client-*.pem)
# or
cat *-client-*.pem | openssl x509 -noout -text

# Copy to master/worker nodes.
sudo ssh k8s@k8s-master-2 "sudo mkdir -p /etc/kubernetes/ssl"
sudo ssh k8s@k8s-master-3 "sudo mkdir -p /etc/kubernetes/ssl"
sudo ssh k8s@k8s-worker-1 "sudo mkdir -p /etc/kubernetes/ssl"
sudo ssh k8s@k8s-worker-2 "sudo mkdir -p /etc/kubernetes/ssl"

sudo scp -r *.pem k8s@k8s-master-2:/etc/kubernetes/ssl
sudo scp -r *.pem k8s@k8s-master-3:/etc/kubernetes/ssl

sudo scp -r *-client-*.pem k8s@k8s-worker-1:/etc/kubernetes/ssl
sudo scp -r ca.pem k8s@k8s-worker-1:/etc/kubernetes/ssl

sudo scp -r *-client-*.pem k8s@k8s-worker-2:/etc/kubernetes/ssl
sudo scp -r ca.pem k8s@k8s-worker-2:/etc/kubernetes/ssl
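
Before wiring the certificates into systemd units, it is worth verifying that they chain back to the CA and carry the expected SANs:

```shell
cd /etc/kubernetes/ssl

# Both leaf certificates must verify against the self-signed CA.
openssl verify -CAfile ca.pem apiserver.pem apiserver-client.pem

# The apiserver cert's SAN list should contain the master IPs and the
# kubernetes.default* service names (-ext requires OpenSSL >= 1.1.1).
openssl x509 -in apiserver.pem -noout -ext subjectAltName
```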

2.7.2 Configure the kube-apiserver systemd unit

  • /etc/systemd/system/kube-apiserver.service
sudo tee /etc/systemd/system/kube-apiserver.service >/dev/null <<-'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://v1.22.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kubernetes/kubernetes.audit \
  --audit-log-maxage=30 \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --apiserver-count=3 \
  --endpoint-reconciler-type=lease \
  --enable-aggregator-routing=true \
  --runtime-config=admissionregistration.k8s.io/v1 \
  --advertise-address=10.0.0.121 \
  --allow-privileged=true \
  --authorization-mode=Node,RBAC \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NamespaceExists,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/etc/kubernetes/token.csv \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-prefix=cn-south1-k8s-t1 \
  --etcd-servers=https://10.0.0.121:2379,https://10.0.0.122:2379,https://10.0.0.123:2379 \
  --insecure-port=0 \
  --kubelet-client-certificate=/etc/kubernetes/ssl/k8s.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/k8s-key.pem \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --profiling=false \
  --proxy-client-cert-file=/etc/kubernetes/ssl/k8s.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/k8s-key.pem \
  --requestheader-allowed-names=sunwuu.com \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/k8s.pem \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --secure-port=6443 \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-key-file=/etc/kubernetes/ssl/k8s.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/k8s-key.pem \
  --service-cluster-ip-range=192.168.0.0/16 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/k8s.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/k8s-key.pem \
  --v=3

User=root
Group=root
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl restart kube-apiserver
sudo systemctl status kube-apiserver
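
If the service is up, the secure port should answer health probes; the client certificate generated in 2.7.1 authenticates the request:

```shell
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/apiserver-client.pem \
     --key /etc/kubernetes/ssl/apiserver-client-key.pem \
     https://10.0.0.121:6443/healthz
# A healthy apiserver returns: ok
```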

2.8 Deploy kube-controller-manager on the master nodes

2.8.1 Configure the kube-controller-manager systemd unit

  • /etc/systemd/system/kube-controller-manager.service
sudo tee /etc/systemd/system/kube-controller-manager.service >/dev/null <<-'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://v1.22.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-controller-manager/
After=network.target

[Service]
# NOTE: the apiserver's insecure port is disabled (--insecure-port=0), so the
# controller-manager must connect over TLS via a kubeconfig (example path below);
# the service CIDR must also match the apiserver's --service-cluster-ip-range.
ExecStart=/usr/bin/kube-controller-manager \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=192.168.0.0/16 \
  --cluster-cidr=10.233.0.0/16 \
  --cluster-name=cn-south1-k8s-t1 \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/k8s-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/k8s.pem \
  --leader-elect=true \
  --v=3

User=root
Group=root
Restart=always
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl restart kube-controller-manager
sudo systemctl status kube-controller-manager

2.9 Deploy kube-scheduler on the master nodes

2.9.1 Configure the kube-scheduler systemd unit

  • /etc/systemd/system/kube-scheduler.service
sudo tee /etc/systemd/system/kube-scheduler.service >/dev/null <<-'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://v1.22.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-scheduler/
After=network.target

[Service]
# NOTE: the apiserver's insecure port is disabled (--insecure-port=0), so the
# scheduler must connect over TLS via a kubeconfig (example path below).
ExecStart=/usr/bin/kube-scheduler \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=3

User=root
Group=root
Restart=always
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl restart kube-scheduler
sudo systemctl status kube-scheduler

2.10 Deploy coredns on the master nodes

2.10.1 Configure the coredns systemd unit

  • /etc/systemd/system/coredns.service
sudo tee /etc/systemd/system/coredns.service >/dev/null <<-'EOF'
[Unit]
Description=CoreDNS Server Service
After=network.target

[Service]
Type=simple
User=root
Group=root
Restart=always
RestartSec=5s
ExecStart=/usr/bin/coredns -conf /etc/coredns/Corefile
ExecReload=/bin/kill -s HUP $MAINPID
# append: requires systemd >= 240; fall back to journal on older systems.
StandardOutput=append:/mnt/disk1/log/coredns/coredns.out
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable coredns
sudo systemctl restart coredns
sudo systemctl status coredns
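
A quick resolution test against the master's CoreDNS (the listen address depends on how your Corefile binds; the cluster DNS IP, e.g. 192.168.0.10, or the master's own IP are both plausible, so adjust as needed):

```shell
# Resolve the apiserver's Service name through CoreDNS.
dig @10.0.0.121 kubernetes.default.svc.cluster.local +short

# If the health plugin is enabled in the Corefile, it answers on :8080/health.
curl -s http://10.0.0.121:8080/health
```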

2.11 Deploy kubelet on all nodes

2.11.1 Self-sign the kubelet certificates

  • /etc/kubernetes/ssl
sudo curl -o /bin/cfssl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
sudo curl -o /bin/cfssljson -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
sudo chmod +x /bin/cfssl
sudo chmod +x /bin/cfssljson
# or: sudo apt install golang-cfssl

sudo mkdir -p /etc/kubernetes/ssl; cd /etc/kubernetes/ssl

# Generating config.
sudo tee config.json >/dev/null <<-'EOF'
{"signing":{"default":{"expiry":"87600h"},"profiles":{"k8s-cluster-t1":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"87600h"}}}}
EOF

# Generating CA certificate signing request config.
sudo tee ca-csr.json >/dev/null <<-'EOF'
{"CN":"WL4G Root CA cert issuer","CA":{"expiry":"87600h","pathlen":0},"key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"San Francisco 12th street","O":"WL4G company, Inc.","OU":"www dept","ST":"California"}]}
EOF

# Generating k8s certificate signing request config.
# Note: SAN hosts must be bare hostnames/IPs, without an https:// scheme.
sudo tee k8s-csr.json >/dev/null <<-'EOF'
{"hosts":["10.0.0.121","10.0.0.122","10.0.0.123","k8s-master-1","k8s-master-2","k8s-master-3","k8s.wl4gcs.com","n1.k8s.wl4gcs.com","n2.k8s.wl4gcs.com","n3.k8s.wl4gcs.com","127.0.0.1"],"CN":"wl4g.com","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"GuangZhou 6th street","O":"SM, Inc.","OU":"WWW dept","ST":"GuangDong"}]}
EOF

# Generating CA certificate.
sudo cfssl genkey -initca ca-csr.json | sudo cfssljson -bare ca

# Generating k8s certificate.
sudo cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=k8s-cluster-t1 k8s-csr.json | sudo cfssljson -bare k8s

# Print the CA and k8s certificates (x509 cannot parse a private key file).
sudo openssl x509 -in ca.pem -noout -text
sudo openssl x509 -in k8s.pem -noout -text

# Copy to worker nodes.
sudo scp -r  /etc/kubernetes/ssl k8s-worker-1:/etc/kubernetes
sudo scp -r  /etc/kubernetes/ssl k8s-worker-2:/etc/kubernetes

2.11.2 Configure the kubelet systemd unit

  • /etc/systemd/system/kubelet.service
sudo tee /etc/systemd/system/kubelet.service >/dev/null <<-'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://v1.22.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet/
After=network.target

[Service]
ExecStart=/usr/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--max-pods 64 \
--pod-max-pids 16384 \
--pod-manifest-path=/etc/kubernetes/manifests \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--dynamic-config-dir=/etc/kubernetes/kubelet-config \
--enable-controller-attach-detach=true \
--cluster-dns=192.168.0.10 \
--pod-infra-container-image=k8s.gcr.io/pause:3.5 \
--enable-load-reader \
--cluster-domain=cluster.local \
--hostname-override=cn-south1-k8s-t1.10.0.0.121 \
--authorization-mode=Webhook \
--authentication-token-webhook=true \
--anonymous-auth=false \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--cgroup-driver=systemd \
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
--tls-cert-file=/etc/kubernetes/ssl/k8s.pem \
--tls-private-key-file=/etc/kubernetes/ssl/k8s-key.pem \
--rotate-certificates=true \
--cert-dir=/etc/kubernetes/ssl/kubelet \
--system-reserved=memory=300Mi,pid=1000 \
--kube-reserved=memory=400Mi,pid=1000 \
--v=3

User=root
Group=root
Restart=always
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet
sudo systemctl status kubelet
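
With bootstrap tokens enabled on the apiserver, the new kubelet first submits a CSR that must be approved before the node registers:

```shell
# On a master: list pending bootstrap CSRs submitted by the kubelet.
kubectl get csr

# Approve it (substitute the actual CSR name from the previous command).
kubectl certificate approve <csr-name>

# The node should now appear; it becomes Ready once the CNI plugin is deployed.
kubectl get nodes -o wide
```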

2.12 Deploy kube-proxy on all nodes

2.12.1 Configure the kube-proxy systemd unit

  • /etc/systemd/system/kube-proxy.service
sudo tee /etc/systemd/system/kube-proxy.service >/dev/null <<-'EOF'
[Unit]
Description=Kubernetes Proxy
Documentation=https://v1.22.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-proxy/
After=network.target

[Service]
ExecStart=/usr/bin/kube-proxy \
  --bind-address=10.0.0.121 \
  --hostname-override=cn-south1-k8s-t1.10.0.0.121 \
  --cluster-cidr=10.233.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=3

User=root
Group=root
Restart=always
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl restart kube-proxy
sudo systemctl status kube-proxy
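
kube-proxy reports its effective proxy mode on its metrics port (10249 by default), which helps confirm which of the three forwarding modes is actually in use:

```shell
# Prints iptables or ipvs depending on the active mode.
curl -s http://127.0.0.1:10249/proxyMode

# In ipvs mode, inspect the generated virtual servers (requires ipvsadm).
sudo ipvsadm -Ln
```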

2.13 Deploy calico on all nodes

2.13.1 Configure the calico systemd unit

TODO

3. Istio Deployment

  • Test deployment
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.12.0
cp bin/istioctl /usr/local/bin
istioctl install --set profile=demo -y
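
Verify the installation and enable sidecar injection before deploying workloads such as bookinfo:

```shell
# istiod and the ingress/egress gateways should reach Running.
kubectl get pods -n istio-system

# Label a namespace so Envoy sidecars are injected automatically.
kubectl label namespace default istio-injection=enabled
```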

4. FAQ

  • How to view Kubernetes data stored in etcd?
# Print all etcd keys.
etcdctl get / --prefix --keys-only

# Get the cn-south1-k8s-t1 cluster's system:node clusterrole, hex-encoded.
etcdctl get /cn-south1-k8s-t1/clusterroles/system:node --hex

# Delete ALL data of the cn-south1-k8s-t1 cluster (dangerous; kept commented out).
#etcdctl del --prefix /cn-south1-k8s-t1
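
Note that against the TLS-enabled etcd deployed in section 2.4, etcdctl needs the certificate flags (or the equivalent ETCDCTL_* environment variables):

```shell
export ETCDCTL_API=3
etcdctl --endpoints=https://10.0.0.121:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  get / --prefix --keys-only
```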
