Deploying a Kubernetes 1.16.1 HA cluster with kubeadm


Environment:

Plan: deploy 3 masters with scheduling enabled, and add worker nodes later.
OS: CentOS Linux release 7.6.1810 (Core)
Docker version: 18.09.3
Network layout:

172.16.0.5  k8s01
172.16.0.10 k8s02
172.16.0.16 k8s03
172.16.0.20 k8s-vip

General host tuning

Kernel upgrade and IPVS

See the separate guide on upgrading the kernel and enabling IPVS.

Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
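An alternative that keeps the original file intact is to comment the swap entry out rather than filtering it away. A sketch, demonstrated on a sample fstab; on a real node run the same sed against /etc/fstab:

```shell
# Demo on a sample fstab; on a real node, target /etc/fstab instead.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs   defaults  0 0
/dev/mapper/centos-swap swap swap  defaults  0 0
EOF
# Comment out the swap entry instead of deleting it, so it can be restored.
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```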

Kernel parameters for bridged traffic

echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
""" > /etc/sysctl.conf
sysctl -p
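One gotcha worth noting: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so sysctl -p can fail with "No such file or directory" on a fresh boot. A sketch of loading it now and on every boot:

```shell
# Load the module immediately (may require a reboot into the upgraded kernel first).
modprobe br_netfilter
# Persist the module across reboots via systemd-modules-load.
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```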

Time and timezone

timedatectl set-timezone Asia/Shanghai
ntpdate -u time1.aliyun.com

Set up passwordless SSH

ssh-keygen   # press Enter three times
ssh-copy-id -i ~/.ssh/id_rsa.pub  k8s01
ssh-copy-id -i ~/.ssh/id_rsa.pub  k8s02
ssh-copy-id -i ~/.ssh/id_rsa.pub  k8s03

Install and tune Docker

See the separate guide on installing and tuning Docker.

Install keepalived and haproxy for high availability

See the keepalived installation guide.
See the haproxy installation and configuration guide.

Install Kubernetes

Install the Kubernetes components

Configure the yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the components (if a newer release has shipped since, pin the versions, e.g. yum install -y kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1):

yum install -y kubelet kubeadm kubectl ebtables
systemctl enable kubelet

Prepare the images

List which images are required:

[root@k8s01 ~]# kubeadm config images list
W1014 09:51:10.863672   16026 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1014 09:51:10.863898   16026 version.go:102] falling back to the local client version: v1.16.1
k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

Pull the images:

[root@k8s01 ~]# kubeadm config images pull --config kubeadm-init.yaml
W1014 09:53:08.496306   16046 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "controlPlaneEndpoint"
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2
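The pull above references a kubeadm-init.yaml that is not shown in this post. A hypothetical reconstruction, consistent with the CLI flags used in the init step below, might look like the sketch here. Note the warning above: controlPlaneEndpoint is a ClusterConfiguration field, and kubeadm complains when it appears under InitConfiguration.

```shell
# Hypothetical kubeadm-init.yaml (not the author's original file).
# controlPlaneEndpoint belongs in ClusterConfiguration; putting it in
# InitConfiguration triggers the "unknown field" warning shown above.
cat <<'EOF' > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.1
controlPlaneEndpoint: "172.16.0.20:16443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: "10.0.0.0/12"
EOF
```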

Initialize with kubeadm

kubeadm init:

kubeadm init \
--apiserver-advertise-address=172.16.0.5 \
--apiserver-bind-port=6443 \
--control-plane-endpoint=172.16.0.20:16443 \
--kubernetes-version v1.16.1 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.0.0.0/12

Save this output; the join commands are needed later:

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.0.20:16443 --token mpzc7f.37mtclb6ruqdghj0 \
    --discovery-token-ca-cert-hash sha256:69e8ed5c254a07b54076fb015707336b487aa7e0373f98ad00cdcf17854e945c \
    --control-plane       

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.20:16443 --token mpzc7f.37mtclb6ruqdghj0 \
    --discovery-token-ca-cert-hash sha256:69e8ed5c254a07b54076fb015707336b487aa7e0373f98ad00cdcf17854e945c 

Copy the cluster config and set up kubectl

mkdir -p $HOME/.kube
/bin/cp -arf /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster

[root@k8s01 ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.0.20:16443
KubeDNS is running at https://172.16.0.20:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
k8s01   NotReady   master   3m1s   v1.16.1
[root@k8s01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-67c766df46-km2bp        0/1     Pending   0          3m5s
kube-system   coredns-67c766df46-w5nz6        0/1     Pending   0          3m5s
kube-system   etcd-k8s01                      1/1     Running   0          2m3s
kube-system   kube-apiserver-k8s01            1/1     Running   0          2m16s
kube-system   kube-controller-manager-k8s01   1/1     Running   0          2m13s
kube-system   kube-proxy-vhb48                1/1     Running   0          3m5s
kube-system   kube-scheduler-k8s01            1/1     Running   0          2m10s

Set up the calico network

The coredns pods stay Pending until a pod network add-on is installed. First, allow scheduling on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-

Then apply the manifest from the official calico documentation:

kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
watch kubectl get pods --all-namespaces
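One caveat: the kubeadm init above did not pass --pod-network-cidr, while the calico manifest defaults its pool to 192.168.0.0/16. If a different pod CIDR is needed, edit CALICO_IPV4POOL_CIDR in the downloaded manifest before applying it. A sketch of the edit, demonstrated on a stub of the relevant snippet (10.244.0.0/16 is just an example value):

```shell
# Stub of the relevant env var from calico.yaml; in practice, download the
# full manifest and run the same sed on it before kubectl apply.
cat > /tmp/calico-snippet.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF
# 10.244.0.0/16 here is an example pod CIDR, not a requirement.
sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#' /tmp/calico-snippet.yaml
cat /tmp/calico-snippet.yaml
```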

Once the calico pods have initialized, everything should be Running:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-b7fb7899c-cx2f9   1/1     Running   0          27s
kube-system   calico-node-rpnvq                         1/1     Running   0          27s
kube-system   coredns-67c766df46-km2bp                  1/1     Running   0          5m40s
kube-system   coredns-67c766df46-w5nz6                  1/1     Running   0          5m40s
kube-system   etcd-k8s01                                1/1     Running   0          4m38s
kube-system   kube-apiserver-k8s01                      1/1     Running   0          4m51s
kube-system   kube-controller-manager-k8s01             1/1     Running   0          4m48s
kube-system   kube-proxy-vhb48                          1/1     Running   0          5m40s
kube-system   kube-scheduler-k8s01                      1/1     Running   0          4m45s

Add the remaining masters

Copy the certificates to the other masters

ssh k8s02 rm -rf /etc/kubernetes ; ssh k8s02 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf k8s02:/etc/kubernetes/
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} k8s02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* k8s02:/etc/kubernetes/pki/etcd/


ssh k8s03 rm -rf /etc/kubernetes ; ssh k8s03 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf k8s03:/etc/kubernetes/
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} k8s03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* k8s03:/etc/kubernetes/pki/etcd/

Join the remaining masters to the cluster:

  kubeadm join 172.16.0.20:16443 --token mpzc7f.37mtclb6ruqdghj0 \
    --discovery-token-ca-cert-hash sha256:69e8ed5c254a07b54076fb015707336b487aa7e0373f98ad00cdcf17854e945c \
    --control-plane 

Then run the taint removal on k8s01 again so the newly joined masters are schedulable:

kubectl taint nodes --all node-role.kubernetes.io/master-

Configure kubectl on the other masters

mkdir -p $HOME/.kube
/bin/cp -arf /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Join worker nodes:

kubeadm join 172.16.0.20:16443 --token mpzc7f.37mtclb6ruqdghj0 \
    --discovery-token-ca-cert-hash sha256:69e8ed5c254a07b54076fb015707336b487aa7e0373f98ad00cdcf17854e945c 

Note: tokens have a limited lifetime (24 hours by default). If the old token has expired, run kubeadm token create --print-join-command on a master node to generate a fresh join command.

