k8s single-node and multi-node deployment

k8s single-node deployment

References

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html

1. Environment planning

Software versions:

| Software   | Version    |
|------------|------------|
| Linux      | CentOS 7.4 |
| Kubernetes | 1.14       |
| Docker     | 18         |
| etcd       | 3.3        |

| Role   | IP          | Components                                                    |
|--------|-------------|---------------------------------------------------------------|
| master | 172.16.1.43 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| node1  | 172.16.1.44 | kubelet, kube-proxy, docker, flannel, etcd                    |
| node2  | 172.16.1.45 | kubelet, kube-proxy, docker, flannel, etcd                    |

2. Install Docker

Install Docker on both node machines.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

Add a domestic (China) registry mirror.

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
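Docker only picks up daemon.json changes after a restart; the standard sequence is:

systemctl daemon-reload
systemctl restart docker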

3. Self-signed TLS certificates

Here etcd and Kubernetes share the same CA and server certificate.

You can also create them separately, with each component using its own certificate.

Certificates used by each component:

- etcd: ca.pem, server.pem, server-key.pem
- kube-apiserver: ca.pem, server.pem, server-key.pem
- kubelet: ca.pem, ca-key.pem
- kube-proxy: ca.pem, kube-proxy.pem, kube-proxy-key.pem
- kubectl: ca.pem, admin.pem, admin-key.pem
- flannel: ca.pem, server.pem, server-key.pem

1) Install the certificate generation tool cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
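A quick sanity check that the tools landed on the PATH:

cfssl version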

2) Create the directory /opt/tools/cfssl; all certificates are generated there

mkdir -p /opt/tools/cfssl
cd /opt/tools/cfssl

3.1 Create the etcd certificates

1) CA config

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

2) CA certificate signing request (CSR) file

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
EOF

3) Generate the CA certificate and private key (initialize the CA)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Check the generated CA files:

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
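Optionally, inspect the new CA with the cfssl-certinfo tool installed above:

cfssl-certinfo -cert ca.pem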

3.2 Create the server certificate

1) Server CSR file

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Check the server certificate files:

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

3.3 Create the kube-proxy certificate

1) kube-proxy CSR file

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3.4 Create the client certificate

1) admin CSR file

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

4. Install etcd

Install etcd on all three machines for high availability.

1) Download etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
tar xf etcd-v3.3.12-linux-amd64.tar.gz
cd etcd-v3.3.12-linux-amd64
cp etcd etcdctl /opt/kubernetes/bin/

2) Add the kubernetes bin directory to the PATH environment variable for later convenience
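A minimal sketch of such a profile script. The file name /etc/profile.d/kubernetes.sh matches the script copied to master2 later in this guide, but its exact contents here are an assumption:

cat > /etc/profile.d/kubernetes.sh <<EOF
# add the k8s/etcd binaries to PATH (assumed contents)
export PATH=\$PATH:/opt/kubernetes/bin
EOF
source /etc/profile.d/kubernetes.sh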

3) Edit the etcd config file

vim /opt/kubernetes/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.1.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.43:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.43:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.1.43:2380,etcd02=https://172.16.1.44:2380,etcd03=https://172.16.1.45:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
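The member name and URLs must be adjusted on each host. For example, on node1 (172.16.1.44), which is etcd02 in ETCD_INITIAL_CLUSTER above, the per-node values become:

#[Member]
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://172.16.1.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.44:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.44:2379"
# ETCD_INITIAL_CLUSTER, the token, and the state are identical on all three members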

4) Create the systemd unit

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
# set GOMAXPROCS to number of processors
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
#--peer-client-cert-auth="${ETCD_PEER_CLIENT_CERT_AUTH}"
#--client-cert-auth="${ETCD_CLIENT_CERT_AUTH}"
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5) Start etcd. The first node will appear to hang because it cannot reach the other members; press Ctrl+C to exit, the service has in fact started.

systemctl start etcd
systemctl enable etcd

6) Check the etcd cluster health

/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" cluster-health

member 2389474cc6fd9d08 is healthy: got healthy result from https://172.16.1.45:2379
member 5662fbe4b66bbe16 is healthy: got healthy result from https://172.16.1.44:2379
member 9f7ff9ac177a0ffb is healthy: got healthy result from https://172.16.1.43:2379
cluster is healthy

5. Deploy the flannel network

Deploy flannel on both node machines.

Without a flanneld network, pods on different nodes cannot talk to each other; only pods on the same node can. To keep the deployment steps clear, flanneld is installed at this later stage.

The flannel service must start before docker. At startup flanneld does the following:

- fetches the network configuration from etcd
- allocates a subnet and registers it in etcd
- writes the subnet information to /run/flannel/subnet.env
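For reference, a subnet.env written by flanneld looks roughly like this; the subnet value is node-specific, and the numbers below are illustrative, matching the 172.17.16.0/24 subnet that node1 ends up with later in this guide:

FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.16.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false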

5.1 Register the pod network in etcd

1) The pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager.

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

2) Verify the registration

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" get /coreos.com/network/config

5.2 Install flannel

1) Download, unpack, install

https://github.com/coreos/flannel/releases

tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

2) Edit the flannel config file

vim /opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/server.pem --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

3) Create the flanneld systemd unit

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Notes:

- The mk-docker-opts.sh script writes the pod subnet allocated to flanneld into /run/flannel/subnet.env; when docker starts it uses the environment variables in this file to configure the docker0 bridge.
- flanneld communicates with other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. private and public), use the -iface parameter to pick the interface.
- flanneld must run as root.

4) Modify the docker systemd unit

Make docker start with the flannel subnet: set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS.

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

5) Start the services

Note: stop docker (and any kubelet that depends on it) before starting flannel, so that flannel can take over the docker0 bridge.

systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker

6) Verify

cat /run/flannel/subnet.env
ip a
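A quick cross-node check, assuming node1 got 172.17.16.0/24 and node2 got 172.17.66.0/24 (read the real values from subnet.env on each node; these addresses are illustrative):

# on node1: ping node2's docker0 gateway across the vxlan overlay
ping -c 3 172.17.66.1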

6. Create the node kubeconfig files

Work in the directory where the certificates were generated:

cd /opt/tools/cfssl

6.1 Create the TLS Bootstrapping token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

6.2 Create the kubelet kubeconfig

KUBE_APISERVER="https://172.16.1.43:6443"

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

6.3 Create the kube-proxy kubeconfig

The kubectl binary ships in the kubernetes-node package.

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

This produces two config files, bootstrap.kubeconfig and kube-proxy.kubeconfig, which are needed when deploying the nodes.

7. Deploy the master

Install on the master node.

The kubernetes master runs the following components: kube-apiserver, kube-scheduler, kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block, which is how a three-master HA setup operates.

Download the release tarball and put the binaries in place:

https://github.com/kubernetes/kubernetes/releases

tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
mv kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/

7.1 Install kube-apiserver

1) Create the apiserver config file

vim /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.16.1.43 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.1.43 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

2) Create the apiserver systemd unit

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3) Start the apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s-master cfg]# ss -tnlp | grep kube-apiserver
LISTEN 0 16384 172.16.1.43:6443 *:* users:(("kube-apiserver",pid=5487,fd=5))
LISTEN 0 16384 127.0.0.1:8080 *:* users:(("kube-apiserver",pid=5487,fd=3))

7.2 Install kube-scheduler

1) Create the config file

vim /opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

--address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https.

--kubeconfig: path of the kubeconfig file kube-scheduler uses to connect to and authenticate with kube-apiserver.

--leader-elect=true: cluster mode with leader election enabled; the elected leader does the work while the other instances block.

2) Create the scheduler systemd unit

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3) Start the scheduler

systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

7.3 Install kube-controller-manager

1) Create the config file

vim /opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3) Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

Installation complete; check the master component status:

[root@k8s-master cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

8. Deploy the nodes

On all node machines.

A kubernetes worker node runs the following components: docker, kubelet, kube-proxy, flannel.

Download the node package:

tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet /opt/kubernetes/bin/

Copy bootstrap.kubeconfig and kube-proxy.kubeconfig from the master to every node.

Copy the relevant certificates from the master to every node, for example:
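A sketch only: the paths follow this guide's layout, and it assumes the /opt/kubernetes/cfg and /opt/kubernetes/ssl directories already exist on the nodes.

# run on the master
cd /opt/tools/cfssl
for node in 172.16.1.44 172.16.1.45; do
  scp bootstrap.kubeconfig kube-proxy.kubeconfig $node:/opt/kubernetes/cfg/
  scp ca.pem ca-key.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem $node:/opt/kubernetes/ssl/
done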

8.1 Install kubelet

kubelet runs on every worker node: it receives requests from kube-apiserver, manages pod containers, and executes interactive commands such as exec, run, and logs. On startup kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and reports node resource usage. For security, only the https port is opened, with authentication and authorization on every request, rejecting unauthorized access (e.g. from apiserver or heapster).

1) Create the kubelet config file

vim /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=172.16.1.44 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

2) Create the kubelet systemd unit

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

3) Bind the kubelet-bootstrap user to the system cluster role; otherwise kubelet fails at startup with an error that the kubelet-bootstrap user has no permission to create certificates

Run this on the master; it connects to localhost:8080 by default.

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

4) Start kubelet

systemctl enable kubelet
systemctl start kubelet

5) Approve the kubelet CSR on the master. CSRs can be approved manually or automatically. The automatic way is recommended because, starting with v1.8, the certificates issued for approved CSRs can be rotated automatically. The manual flow follows (a sketch of the automatic binding appears after it). List the CSRs:

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m    kubelet-bootstrap   Pending

Approve the node:

kubectl certificate approve node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms

Check the CSR again:

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m    kubelet-bootstrap   Approved,Issued
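For the automatic approach mentioned above, one common pattern (an assumption, not part of this write-up) is to bind the bootstrap user to the built-in CSR-approving cluster role:

# hypothetical binding name; the clusterrole ships with the default RBAC bootstrap policy
kubectl create clusterrolebinding node-client-auto-approve \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap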

Check the cluster node status:

[root@k8s-master cfssl]# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
172.16.1.44   Ready    <none>   138m   v1.13.0

8.2 Install kube-proxy

kube-proxy runs on all node machines; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.

1) Create the kube-proxy config file

vim /opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

2) Create the kube-proxy systemd unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3) Start kube-proxy

systemctl enable kube-proxy
systemctl start kube-proxy

After the node joins the cluster, a kubelet kubeconfig and client certificates are generated under the cfg and ssl directories:

[root@k8s-node1 cfg]# ls /opt/kubernetes/cfg/kubelet.kubeconfig
/opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-node1 cfg]# ls /opt/kubernetes/ssl/kubelet*
/opt/kubernetes/ssl/kubelet-client-2019-03-30-11-49-33.pem  /opt/kubernetes/ssl/kubelet.crt
/opt/kubernetes/ssl/kubelet-client-current.pem              /opt/kubernetes/ssl/kubelet.key

Note: if kubelet or kube-proxy is misconfigured (for example a wrong listen IP or hostname causing "node not found"), delete the kubelet-client certificates, restart kubelet, and re-approve the CSR.

9. The kubectl management tool

Configure kubectl on a client machine to manage the cluster.

1) Copy the kubectl binary to the client

scp kubectl 172.16.1.44:/usr/bin/

2) Copy the admin and CA certificates created earlier to the client

scp admin*pem ca.pem 172.16.1.44:/root/kubernetes/

3) In the cluster entry named kubernetes, set the apiserver address and the root certificate

kubectl config set-cluster kubernetes --server=https://172.16.1.43:6443 --certificate-authority=kubernetes/ca.pem

This generates the config file /root/.kube/config:

cat .kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

4) Set the certificate credentials for the cluster-admin user entry

kubectl config set-credentials cluster-admin --certificate-authority=kubernetes/ca.pem --client-key=kubernetes/admin-key.pem --client-certificate=kubernetes/admin.pem

This adds the admin user to /root/.kube/config:

cat .kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

5) Create a context entry named default that ties the cluster and user together

kubectl config set-context default --cluster=kubernetes --user=cluster-admin

This adds the default context to /root/.kube/config:

cat .kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

6) Switch the current context to default

kubectl config use-context default

The complete client config file:

.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

7) Test: use kubectl from the client to reach the cluster

[root@k8s-node1 ~]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
172.16.1.44   Ready    <none>   6h30m   v1.13.0
172.16.1.45   Ready    <none>   6h10m   v1.13.0

Pack up the client certificates and config file; the bundle works the same way on any other client.
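A minimal sketch of such a bundle (the archive name is arbitrary):

tar czf kubectl-client.tar.gz /usr/bin/kubectl /root/.kube/config \
  /root/kubernetes/admin.pem /root/kubernetes/admin-key.pem /root/kubernetes/ca.pem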

10. Install CoreDNS

When installing kubelet we pointed DNS at 10.10.10.2, but no DNS component is installed yet, so newly created pods cannot resolve names; a DNS add-on is required.

From kubernetes 1.13 on, CoreDNS replaces kube-dns as the default.

1) Generate the coredns.yaml file

The template for coredns.yaml:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __PILLAR__DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

The transforms2sed.sed file:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/transforms2sed.sed

s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
s/__PILLAR__CLUSTER_CIDR__/$SERVICE_CLUSTER_IP_RANGE/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

Use sed to replace the placeholder fields in the template: $DNS_SERVER_IP is the cluster DNS address 10.10.10.2, $DNS_DOMAIN is the cluster root domain cluster.local, and $SERVICE_CLUSTER_IP_RANGE is the cluster service IP range 10.10.10.0/24.
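Note that the $-prefixed names inside transforms2sed.sed are placeholders rather than shell variables; one way to fill them in (an assumption, the original write-up does not show this step) is a one-off sed over the sed script itself:

sed -i 's#\$DNS_SERVER_IP#10.10.10.2#g; s#\$DNS_DOMAIN#cluster.local#g; s#\$SERVICE_CLUSTER_IP_RANGE#10.10.10.0/24#g' transforms2sed.sed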

You also need to add the apiserver address to the config, endpoint http://172.16.1.43:8080; by default CoreDNS would connect to 10.10.10.1:443, which is not reachable.

apiVersion: v1
kind: ConfigMap
...
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            endpoint http://172.16.1.43:8080
            ...
        }
    }

Generate the final config file:

sed -f transforms2sed.sed coredns.yaml.base > coredns.yaml

The completed coredns.yaml:

# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            endpoint http://172.16.1.43:8080
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Official plugin docs: https://coredns.io/plugins/kubernetes/

- resyncperiod: interval for re-syncing data from the kubernetes API
- endpoint: the kubernetes API address; coredns health-checks it and proxies requests to healthy endpoints
- tls: the certificates used to connect to a remote kubernetes API
- pods: sets the pod mode, one of three:
  - disabled: the default
  - insecure: returns an A record for the IP without checking that a pod with that IP actually exists; mainly for kube-dns compatibility
  - verified: recommended; returns the A record only after verifying that a pod with that IP exists, at the cost of more memory than insecure
- upstream: where external name lookups are forwarded; either an IP address or a resolv.conf file
- ttl: default 5s, maximum 3600s
- errors: errors are logged to stdout
- health: liveness check for the current config; listens on http port 8080 by default (configurable)
- kubernetes: answers DNS queries with service IPs
- prometheus: exposes prometheus-format metrics at http://localhost:9153/metrics
- proxy: names that cannot be resolved locally are forwarded upstream; defaults to the host's /etc/resolv.conf
- cache: caches DNS answers in memory, in seconds
- reload: interval in seconds for automatic reload when the config file changes

An example Corefile using these options:

.:53 {
    kubernetes wh01 {
        resyncperiod 10s
        endpoint https://10.1.61.175:6443
        tls admin.pem admin-key.pem ca.pem
        pods verified
        endpoint_pod_names
        upstream /etc/resolv.conf
    }
    health
    log /var/log/coredns.log
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload 10s
}

2) Deploy CoreDNS

kubectl create -f coredns.yaml

3) Check the status

kubectl get all -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
pod/coredns-69b995478c-vs46g               1/1     Running   0          50m   172.17.16.2   172.16.1.44   <none>           <none>
pod/kubernetes-dashboard-9bb654ff4-4zmn8   1/1     Running   0          12h   172.17.66.5   172.16.1.45   <none>           <none>

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns               ClusterIP   10.10.10.2     <none>        53/UDP,53/TCP,9153/TCP   50m     k8s-app=kube-dns
service/kubernetes-dashboard   NodePort    10.10.10.191   <none>        81:45236/TCP             4d17h   app=kubernetes-dashboard

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS             IMAGES                                                                                    SELECTOR
deployment.apps/coredns                1/1     1            1           50m     coredns                coredns/coredns:1.3.1                                                                     k8s-app=kube-dns
deployment.apps/kubernetes-dashboard   1/1     1            1           4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard

NAME                                             DESIRED   CURRENT   READY   AGE     CONTAINERS             IMAGES                                                                                    SELECTOR
replicaset.apps/coredns-69b995478c               1         1         1       50m     coredns                coredns/coredns:1.3.1                                                                     k8s-app=kube-dns,pod-template-hash=69b995478c
replicaset.apps/kubernetes-dashboard-9bb654ff4   1         1         1       4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard,pod-template-hash=9bb654ff4

4) Test

The nslookup in the busybox image is buggy: the test succeeds but it returns wrong information.

Test instead with an alpine-based image that ships proper DNS tools:

kubectl run dig --rm -it --image=docker.io/azukiapp/dig /bin/sh
----------
/ # cat /etc/resolv.conf
nameserver 10.10.10.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # dig kubernetes.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> kubernetes.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13605
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 5 IN A 10.10.10.1

;; Query time: 1 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:18 UTC 2019
;; MSG SIZE  rcvd: 117

/ # dig nginx-service.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> nginx-service.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24013
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-service.default.svc.cluster.local. IN A

;; ANSWER SECTION:
nginx-service.default.svc.cluster.local. 5 IN A 10.10.10.176

;; Query time: 0 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:29 UTC 2019
;; MSG SIZE  rcvd: 123

/ # dig www.baidu.com

; <<>> DiG 9.10.3-P3 <<>> www.baidu.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28619
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 5, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.baidu.com. IN A

;; ANSWER SECTION:
www.baidu.com.    30 IN CNAME www.a.shifen.com.
www.a.shifen.com. 30 IN A     119.75.217.26
www.a.shifen.com. 30 IN A     119.75.217.109

;; AUTHORITY SECTION:
a.shifen.com. 30 IN NS ns4.a.shifen.com.
a.shifen.com. 30 IN NS ns5.a.shifen.com.
a.shifen.com. 30 IN NS ns3.a.shifen.com.
a.shifen.com. 30 IN NS ns2.a.shifen.com.
a.shifen.com. 30 IN NS ns1.a.shifen.com.

;; ADDITIONAL SECTION:
ns4.a.shifen.com. 30 IN A 14.215.177.229

;; Query time: 3 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:41 UTC 2019
;; MSG SIZE  rcvd: 391

11. Master high availability

11.1 Add a master2 node

1) On master1, add master2's IP 172.16.1.46 to the hosts of the server certificate request file, then regenerate the server certificate

cd /opt/tools/cfssl
vim server-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.10.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "172.16.1.46",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

2) Regenerate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

3) Copy server.pem and server-key.pem to the ssl directory on the masters and nodes, and restart the services that use them

4) Copy /opt/kubernetes and the apiserver, scheduler, and controller-manager unit files from master1 to master2

scp -r /opt/kubernetes 172.16.1.46:/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service 172.16.1.46:/usr/lib/systemd/system/
scp /etc/profile.d/kubernetes.sh 172.16.1.46:/etc/profile.d/

5) On master2, edit the kube-apiserver config and change the listen addresses to master2's IP

cat kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \
--insecure-bind-address=0.0.0.0 \
--bind-address=172.16.1.46 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.1.46 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

6) Start the services on master2

systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager

Check the logs for errors.

7) Test from master2

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

11.2 Install an nginx proxy on the node machines

Install nginx on all node machines, listening on the local port 6443 and forwarding to the backend apiservers on port 6443; then point the nodes' kubelet at the apiserver address 127.0.0.1:6443. This makes the masters highly available.

1) Install nginx

yum install nginx -y

2) Edit the nginx config file

vim nginx.conf

user nginx nginx;
worker_processes 8;

pid /usr/local/nginx/logs/nginx.pid;
worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections 65535;
}

stream {
    upstream k8s-apiserver {
        server 172.16.1.43:6443;
        server 172.16.1.46:6443;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass k8s-apiserver;
    }
}

3) Start nginx

/etc/init.d/nginx start
chkconfig nginx on
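Before repointing the node components, a quick check that the stream proxy is listening locally:

ss -tnlp | grep 6443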

4) Change the apiserver address in all node components

cd /opt/kubernetes/cfg
ls *config | xargs -i sed -i 's/172.16.1.43/127.0.0.1/' {}

5) Restart kubelet and kube-proxy

systemctl restart kubelet
systemctl restart kube-proxy

Check the logs for errors.

6) On a master, check the node status

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

Everything looks normal; master high availability is now complete.
