Deploying a Kubernetes 1.13.1 Cluster with Kubeadm: A Hands-On Walkthrough

2018-12-27 08:10:22 +08:00
 hansonwang99


Overview

There are several ways to set up a Kubernetes cluster. For example, my earlier article [《利用 K8S 技术栈打造个人私有云(连载之:K8S 集群搭建)》]( https://www.codesheep.cn/2018/02/01/利用 K8S 技术栈打造个人私有云(连载之:K8S 集群搭建)/) used the binary installation method. That approach is good for understanding how a k8s cluster fits together, but it is quite tedious. kubeadm, the tool officially provided by Kubernetes for quickly deploying clusters, has matured considerably over time; deploying a cluster with it is very approachable and far simpler to operate, so this article walks through the process in detail.


Node Planning

This article deploys a three-node Kubernetes cluster with one master and two workers, planned as follows:

Hostname | IP | Role
k8s-master | 192.168.39.79 | k8s master node
k8s-node-1 | 192.168.39.77 | k8s worker node
k8s-node-2 | 192.168.39.78 | k8s worker node

As for software versions: the nodes run CentOS 7 (the yum repository configured below targets el7), and the cluster version deployed is Kubernetes v1.13.1.

All nodes need the following components installed: Docker, kubelet, kubeadm, and kubectl.
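Once the Component Installation steps below have been completed, the installed versions can be sanity-checked on each node with something like the following (just a convenience sketch, not part of the original steps):

docker version --format '{{.Server.Version}}'
kubelet --version
kubeadm version -o short
kubectl version --client --short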


Preparation

# Disable the firewall (all nodes)
systemctl disable firewalld.service
systemctl stop firewalld.service

# Disable SELinux, both immediately and across reboots
setenforce 0
vi /etc/selinux/config    # set: SELINUX=disabled

# Turn swap off (the kubelet will not start with swap enabled)
swapoff -a

# Set the hostname; run the matching command on each node
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2

Edit the /etc/hosts file on every node and add the following entries:

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
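One hedged addition: swapoff -a is not persistent, so it is also worth commenting out the swap entry in /etc/fstab to keep swap disabled after a reboot:

# Comment out the swap line so swap stays off across reboots
sed -i '/ swap / s/^/#/' /etc/fstab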

Component Installation

0x01. Installing Docker (all nodes)

Not covered in detail here!
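For reference, a minimal sketch of a CentOS 7 Docker installation via the Aliyun mirror might look like the following; the pinned 18.06 version is an assumption based on what Kubernetes 1.13 was validated against, not something specified in the original:

# Prerequisites and the Aliyun docker-ce repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a Docker release validated for Kubernetes 1.13 (assumed version)
yum install -y docker-ce-18.06.1.ce
systemctl enable docker && systemctl start docker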

0x02. Installing kubelet, kubeadm and kubectl (all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

# Same SELinux changes as in the Preparation section; harmless to repeat
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
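Note that the command above installs whatever the latest packages in the repository are; to reproduce this exact v1.13.1 deployment it is safer to pin the versions (standard el7 package naming):

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1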


Master Node Configuration

0x01. Initializing the k8s cluster

To work around unreliable access to k8s.gcr.io from networks inside China, we pull the required images from mirrors ahead of time and re-tag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1           
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.2.24                      
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
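The pull/tag/remove sequence above can be written more compactly as a loop; this is just an equivalent sketch covering the mirrorgooglecontainers images (coredns and flannel come from different registries and are handled individually, as above):

# Pull from the mirror, re-tag as k8s.gcr.io, then drop the mirror tag
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
           kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/${img}
  docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img}
  docker rmi mirrorgooglecontainers/${img}
done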

Then run the following command on the Master node to initialize the k8s cluster:

kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
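Equivalently, the same settings can be supplied through a kubeadm config file; a minimal sketch (the advertise address is omitted here, since in the v1beta1 API it belongs to a separate InitConfiguration section):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml

Incidentally, the warning at the top of the log below shows what happens when such a YAML file contains a stray non-breaking space before a field name: kubeadm reports podSubnet as an unknown field.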

The console then prints the detailed cluster initialization log shown below (this particular run used the config-file invocation):

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

[root@localhost ~]#

0x02. Configuring kubectl

On the Master, run the following as root to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile 
echo $KUBECONFIG
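To confirm that kubectl can actually reach the API server, a quick check (a hedged convenience, not a step from the original):

kubectl cluster-info
kubectl get cs    # component statuses: scheduler, controller-manager, etcd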

0x03. Installing the Pod network

A Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network add-ons; here we again go with the classic flannel:

sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f kube-flannel.yaml

The kube-flannel.yaml file is available here.
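Note that the sysctl setting above does not survive a reboot; to make it persistent you can drop it into a sysctl config file, e.g.:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system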

Once the Pod network is installed, run the following to check whether the CoreDNS Pod is up and running; only continue with the next steps after it is:

kubectl get pods --all-namespaces -o wide

At this point we can also see that the master node is Ready:

kubectl get nodes


Adding the Slave Nodes

Run the following on each of the two Slave nodes to join the k8s cluster that is now ready on the Master:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

If you have lost the token, you can list it on the Master with:

kubeadm token list
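If the discovery hash has been lost as well, both pieces can be regenerated on the Master; kubeadm token create --print-join-command prints a fresh, complete join command:

kubeadm token create --print-join-command

# Or recompute the CA certificate hash by hand:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'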

The kubeadm join command above produces output like this:

[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Verification

Run the following on the Master to check that all nodes are Ready and all Pods are Running:

kubectl get nodes

kubectl get pods --all-namespaces -o wide

The cluster is now up and running. Next, let's look at how to tear it down cleanly.


Tearing Down the Cluster

First drain and delete each node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Once the nodes are removed, reset the cluster with:

kubeadm reset
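kubeadm reset does not clean up iptables rules or CNI configuration files; for a truly clean slate, something along these lines is typically also needed on each node (a hedged addition reflecting common practice, not the original text):

# Flush iptables rules left behind by kube-proxy/flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Remove leftover CNI configuration
rm -rf /etc/cni/net.d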

Installing the Dashboard

Just as elasticsearch is usually paired with a visual management tool, it is well worth giving the k8s cluster a visual console to make managing it easier.

So next we install kubernetes-dashboard v1.10.0 for visual cluster management.

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
kubectl create -f dashboard.yaml

The dashboard.yaml file is available here.

[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s

[root@k8s-master ~]# kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
Next, generate a self-signed certificate for the dashboard:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr    # press Enter through all prompts
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
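How the certificate pair gets used depends on the dashboard.yaml above; if it follows the standard dashboard deployment, which mounts a kubernetes-dashboard-certs secret, the files generated above would be loaded like this (an assumption about the manifest, not stated in the original):

kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key -n kube-system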

Then create an admin user and role binding for the dashboard as defined in dashboard-user-role.yaml:

kubectl create -f dashboard-user-role.yaml

The dashboard-user-role.yaml file is available here.

[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw

With the token generated successfully, open a browser, visit the dashboard (the NodePort service above exposes it on port 31234 of each node), and log in with the token:


Afterword

My abilities are limited, so if anything here is wrong or badly put, please point it out; let's learn and improve together!



6 replies
0312birdzhang
2018-12-27 08:21:22 +08:00
I just set up a test cluster with kubeadm a few days ago; these are exactly the steps.
lestat
2018-12-27 08:29:32 +08:00
Bookmarking this for now.
yuedingwangji
2018-12-27 09:07:40 +08:00
I also did a simple setup before. Is there a follow-up tutorial? What should I do with the cluster next?
br00k
2018-12-27 09:09:02 +08:00
I use rancher; the official documentation is thorough and deployment is easy. 😂
ghos
2018-12-27 09:14:24 +08:00
rancher+1
yuedingwangji
2019-01-23 11:57:09 +08:00
Hi, which Docker version are you using, 18.09 or 18.06?
