Upgrading Kubernetes to 1.14.2
Not many changes in this release; notably, the bug where pods in a terminated state kept being displayed has been fixed:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1142
Upgrade kubeadm. Set up the proxy first:
export http_proxy='http://192.168.2.2:1080'
export https_proxy='http://192.168.2.2:1080'
export ftp_proxy='http://192.168.2.2:1080'
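Depending on the setup, it can also help to keep traffic to the local API server off the proxy; the address below is a placeholder for your own API server/node IP:
export no_proxy='localhost,127.0.0.1,192.168.2.10'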
Upgrade kubelet, kubeadm, and kubectl on every node:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
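If the repo already carries a release newer than the one you want, pin the exact package versions instead (the -1.14.2-0 suffix below assumes the standard Kubernetes yum repo naming), then confirm the new kubeadm binary:
yum install -y kubelet-1.14.2-0 kubeadm-1.14.2-0 kubectl-1.14.2-0 --disableexcludes=kubernetes
kubeadm version -o short   # should print v1.14.2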
Check the upgrade plan:
[root@node01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.1
[upgrade/versions] kubeadm version: v1.14.2
[upgrade/versions] Latest stable version: v1.14.2
[upgrade/versions] Latest version in the v1.14 series: v1.14.2
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.14.1   v1.14.2
Upgrade to the latest version in the v1.14 series:
COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.1   v1.14.2
Controller Manager   v1.14.1   v1.14.2
Scheduler            v1.14.1   v1.14.2
Kube Proxy           v1.14.1   v1.14.2
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.14.2
_____________________________________________________________________
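Before applying, a quick sanity check that all nodes are Ready and the kube-system pods are healthy does no harm:
kubectl get nodes -o wide
kubectl get pods -n kube-system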
Fetch the images
# pull from gcr.io on a machine that can reach it (via the proxy)
docker pull gcr.io/google-containers/kube-apiserver:v1.14.2
docker pull gcr.io/google-containers/kube-scheduler:v1.14.2
docker pull gcr.io/google-containers/kube-proxy:v1.14.2
docker pull gcr.io/google-containers/kube-controller-manager:v1.14.2
# copy the images back to the cluster nodes and re-tag them:
docker save gcr.io/google-containers/kube-proxy:v1.14.2 >kube-proxy
docker save gcr.io/google-containers/kube-scheduler:v1.14.2 >kube-scheduler
docker save gcr.io/google-containers/kube-controller-manager:v1.14.2 >kube-controller-manager
docker save gcr.io/google-containers/kube-apiserver:v1.14.2 >kube-apiserver
docker load < kube-proxy
docker load < kube-scheduler
docker load < kube-controller-manager
docker load < kube-apiserver
docker tag gcr.io/google-containers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag gcr.io/google-containers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag gcr.io/google-containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag gcr.io/google-containers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
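Optionally confirm that all four images now carry the k8s.gcr.io tag kubeadm expects:
docker images | grep v1.14.2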
Run the upgrade:
[root@node01 ~]# kubeadm upgrade apply v1.14.2
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.2"
[upgrade/versions] Cluster version: v1.14.1
[upgrade/versions] kubeadm version: v1.14.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.2"...
Static pod: kube-apiserver-node01 hash: 62b27594033ef04996c1fb72709b03f9
Static pod: kube-controller-manager-node01 hash: f4e6a574ceea76f0807a77e19a4d3b6c
Static pod: kube-scheduler-node01 hash: f44110a0ca540009109bfc32a7eb0baa
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests574017419"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-04-39-26/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node01 hash: 62b27594033ef04996c1fb72709b03f9
Static pod: kube-apiserver-node01 hash: ad0f7543d9a791f0cff21be0307e880f
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[apiclient] Found 2 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-04-39-26/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node01 hash: f4e6a574ceea76f0807a77e19a4d3b6c
Static pod: kube-controller-manager-node01 hash: cb5ccbdd9722ba2e71b14f0c7c23e891
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-04-39-26/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node01 hash: f44110a0ca540009109bfc32a7eb0baa
Static pod: kube-scheduler-node01 hash: 69b22202d494dc2370822f3ded0e66c1
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.2". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
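What remains is the kubelet on each node. A rough sketch following the v1.14 upgrade guide (the kubelet/kubectl packages were already upgraded by the yum step above; <node-name> is a placeholder):
# on the control-plane node
systemctl daemon-reload
systemctl restart kubelet

# on each worker node
kubectl drain <node-name> --ignore-daemonsets      # optional: evict workloads first
kubeadm upgrade node config --kubelet-version v1.14.2
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon <node-name>

# verify
kubectl get nodes    # every node should now report v1.14.2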