Two Approaches to Installing K8S on CentOS 7 (Part 3)


Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join 10.0.20.110:6443 --token bzy0jj.nkwctwqmroxh08zu \
    --discovery-token-ca-cert-hash sha256:6c0b3db060415f15c82b7f3c8948519d93708d60a46c1903c0bb11ac04ba17cf \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.20.110:6443 --token bzy0jj.nkwctwqmroxh08zu --discovery-token-ca-cert-hash sha256:6c0b3db060415f15c82b7f3c8948519d93708d60a46c1903c0bb11ac04ba17cf
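The join commands above embed a bootstrap token, which by default is only valid for 24 hours. If it is lost or expires, a fresh ready-to-use join command can be printed on the master at any time; this is a standard kubeadm command, not part of the output above:
# kubeadm token create --print-join-command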
The kubeadm init output above means:
First, a pod network add-on has to be installed, otherwise the nodes will never reach the Ready state.
Second, additional control-plane and worker nodes can be added to the cluster with kubeadm join. This setup has only one control-plane node, but any number of worker nodes can join.
Third, admin.conf has to be copied into the current user's configuration directory so that kubectl knows the address of the API server.
Switch to the root user with su (in fact, every step here is done as root) and run:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
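For completeness: if kubectl is to be used by a regular (non-root) user instead, the kubeadm init output recommends the same three commands, just run with sudo:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config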
Verify with the get nodes command:
# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8smaster   NotReady   master   33h   v1.18.6
The NotReady status is simply because the network plugin has not been installed yet.
Configure the flannel network for internal communication (required on both the master and the nodes)
First set up the internal flannel network:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Make sure the podSubnet address in kubeadm.conf matches the network configuration in kube-flannel.yml.
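For example, assuming the cluster was initialized with flannel's default subnet 10.244.0.0/16 (a placeholder value here; use whatever podSubnet was actually configured during kubeadm init), the two files should agree roughly like this:
kubeadm.conf (ClusterConfiguration):
  networking:
    podSubnet: "10.244.0.0/16"
kube-flannel.yml (net-conf.json in the kube-flannel-cfg ConfigMap):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }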
Apply the configuration file:
# kubectl apply -f kube-flannel.yml
Once this succeeds, the status shown by kubectl get nodes will change to Ready.
Other network plugins can of course be used instead, for example Weave Net, but in my experience it did not work as smoothly as flannel.
Installing the Weave Net plugin (not recommended):
# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
At this point the k8s master node is fully configured; next comes the worker node.
Step 7: Configure the node. Configuring a k8s worker node follows exactly the same steps 1 through 5 as the master, and the network plugin from step 6 is the same as well.
Run the following on the worker node:
# kubeadm join 10.0.20.110:6443 --token bzy0jj.nkwctwqmroxh08zu --discovery-token-ca-cert-hash sha256:6c0b3db060415f15c82b7f3c8948519d93708d60a46c1903c0bb11ac04ba17cf
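If the join hangs or fails, the kubelet log on the worker is usually the first place to look; these are plain systemd commands, not specific to this setup:
# systemctl status kubelet
# journalctl -u kubelet -f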
Then run on the master:
# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   34h   v1.18.6
k8snode1    Ready    <none>   33h   v1.18.6
Both nodes are now in the Ready state.
Running docker ps -a | grep Up on each of the two nodes shows the k8s pods that have been started:
Pods running on the master: kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver, etcd, flannel
Pods running on the node: kube-proxy, flannel
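The same can be checked from the master with kubectl alone; -n kube-system selects the system namespace and -o wide additionally shows which node each pod is scheduled on (standard kubectl flags, not specific to this setup):
# kubectl get pods -n kube-system -o wide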
Continue checking other status and data:
# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
controller-manager and scheduler report Unhealthy; this will be followed up on later and is presumably caused by incorrect configuration.
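For what it's worth, on v1.18 this particular pair of Unhealthy entries is very commonly caused by the --port=0 flag in the static pod manifests, which disables the insecure ports 10251/10252 that kubectl get cs probes; the components themselves may be perfectly healthy. If that turns out to be the cause here, commenting out that flag in the two manifests makes the check pass again (the kubelet re-creates static pods automatically when the manifests change):
# vi /etc/kubernetes/manifests/kube-scheduler.yaml            # comment out the line "- --port=0"
# vi /etc/kubernetes/manifests/kube-controller-manager.yaml   # comment out the line "- --port=0"
# systemctl restart kubelet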
# kubectl get namespace
NAME              STATUS   AGE
default           Active   2d9h
kube-node-lease   Active   2d9h
kube-public       Active   2d9h
kube-system       Active   2d9h
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6"}
# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1

