OS: CentOS 7.6. Minimum node requirements: 2 CPU cores and 2 GB of RAM.
Set the hostname on each node. For example, on the master1 node:
hostnamectl set-hostname master1
Configure host entries:
vi /etc/hosts
192.168.33.11 master1
192.168.33.12 master2
192.168.33.13 master3
192.168.33.14 worker1
Configure passwordless SSH login for the root user between all nodes.
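A minimal sketch of one way to set this up (assuming the node list from the hosts file above; run on each node):
# Generate a key pair if one does not already exist (empty passphrase)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Copy the public key to every node in the cluster
for host in master1 master2 master3 worker1; do
  ssh-copy-id root@$host
done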
Configure DNS:
vi /etc/resolv.conf
Environment setup (all nodes):
yum clean all
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp ntpdate
setenforce 0
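Note that setenforce 0 only takes effect until the next reboot. To keep SELinux out of enforcing mode permanently, also update the config file (standard CentOS 7 path):
# Keep SELinux permissive across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config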
systemctl stop firewalld
systemctl disable firewalld
Turn off swap and disable it at boot:
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
Stop and disable dnsmasq (a lightweight DNS/DHCP tool for small networks):
service dnsmasq stop
systemctl disable dnsmasq
Time synchronization:
timedatectl set-timezone Asia/Shanghai
ntpdate cn.pool.ntp.org
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
If you get errors like the following:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
run:
modprobe br_netfilter
and then again:
sysctl -p /etc/sysctl.d/kubernetes.conf
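To make sure br_netfilter is also loaded automatically after a reboot, you can register it with systemd's modules-load mechanism:
# Load br_netfilter on every boot
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF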
Install Docker (worker nodes)
Based on Kubernetes' Docker version compatibility testing, we choose version 18.09.9.
Because the Docker site has recently been extremely slow or even unreachable, installing via yum rarely succeeds, so we install directly from RPM packages.
Download the RPM packages manually:
mkdir -p /opt/kubernetes/docker && cd /opt/kubernetes/docker
Version 17.03.1:
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.2-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.1.ce-1.el7.centos.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.1.ce-1.el7.centos.noarch.rpm
Version 18.09.9:
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.2-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.9-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.9-3.el7.x86_64.rpm
Remove any existing versions:
yum remove -y docker* container-selinux
Install the RPM packages:
yum localinstall -y *.rpm
Enable Docker at boot:
systemctl enable docker
Configure Docker's data directory. Pick the filesystem with the most free space; here the largest one is mounted at /opt:
mkdir -p /opt/docker-data
mkdir -p /etc/docker
vi /etc/docker/daemon.json
{
  "graph": "/opt/docker-data"
}
service docker restart
docker version
Install Kubernetes
Download the component binaries:
https://github.com/kubernetes/kubernetes/releases?after=v1.16.5-beta.1
https://pan.baidu.com/s/196cACgCQch8ff6ef7oXDUg
mkdir -p /opt/kubernetes/bin
echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
In the previous step we downloaded the binaries for each Kubernetes component. Running these executables requires a number of command-line arguments, and some also depend on configuration files, so we need to prepare those arguments and configuration files in advance.
git clone https://gitee.com/idownload/kubernetes-ha-binary.git
Install the cfssl tools:
wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/bin/cfssljson
chmod +x /usr/bin/cfssl
chmod +x /usr/bin/cfssljson
Verify cfssl:
cfssl version
Log in to the master and create the directory:
mkdir -p /etc/kubernetes/pki
Generate the root CA:
cd kubernetes-ha-binary/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
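For reference, a cfssl CA CSR file has roughly this shape (the cloned repo ships its own ca-csr.json; the values below are illustrative placeholders only):
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "O": "kubernetes"
    }
  ]
}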
scp ca*.pem root@192.168.33.11:/etc/kubernetes/pki
Download and deploy etcd:
wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
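After downloading, unpack the tarball and put the binaries on the PATH configured earlier (a sketch; adjust paths to your layout):
tar -zxvf etcd-v3.2.18-linux-amd64.tar.gz
cp etcd-v3.2.18-linux-amd64/etcd etcd-v3.2.18-linux-amd64/etcdctl /opt/kubernetes/bin/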
Generate the etcd certificates:
cd kubernetes-ha-binary/pki/etcd
cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
scp etcd*.pem root@192.168.33.11:/etc/kubernetes/pki
scp kubernetes-ha-binary/services/etcd.service root@192.168.33.11:/etc/systemd/system/
Log in to the master node and create etcd's working directory:
mkdir -p /var/lib/etcd
Start etcd:
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
Check etcd's status:
service etcd status
View etcd's logs:
journalctl -f -u etcd
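Once etcd is running, you can also check its health end to end with etcdctl (a sketch; it assumes etcdctl from the same tarball is on the PATH and that the certificate paths match those referenced in etcd.service):
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.33.11:2379 \
  --cacert=/etc/kubernetes/pki/ca.pem \
  --cert=/etc/kubernetes/pki/etcd.pem \
  --key=/etc/kubernetes/pki/etcd-key.pem \
  endpoint health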
Deploy the apiserver
Configure the yum repo (if you can reach it directly, you may replace "mirrors.aliyun.com" with "packages.cloud.google.com"):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubeadm --showduplicates | sort -r
#yum install -y kubeadm-1.18.2-0 kubelet-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
#yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0 --disableexcludes=kubernetes
yum install -y kubeadm-1.16.4-0 kubelet-1.16.4-0 kubectl-1.16.4-0 --disableexcludes=kubernetes
The kubelet version here is 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
Enable kubelet at boot:
systemctl enable kubelet
Configure the master node (master only). This is the single-master case. Generate the configuration file:
kubeadm config print init-defaults > kubeadm.conf
Edit the configuration file, paying attention to three places (make sure kubernetesVersion matches the kubeadm version you installed):
kubernetesVersion: v1.18.2
localAPIEndpoint:
advertiseAddress: 10.0.2.51
bindPort: 6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
#networking:
# dnsDomain: cluster.local
# podSubnet: 10.244.0.0/16
# serviceSubnet: 10.96.0.0/12
#scheduler: {}
Pull the images:
kubeadm config images pull --config ./kubeadm.conf
Check the images:
docker images
Initialize:
kubeadm init --config ./kubeadm.conf
Output on successful initialization:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.33.12:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:366892d67eedc0a66f5e40535661cac8df59ab321e15bb3b0da727b1a4f64de5
Note this part of the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run these three commands as shown to load the kubeconfig. Alternatively, you can do it another way, for example:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
And at the very end there is this part:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.33.12:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:366892d67eedc0a66f5e40535661cac8df59ab321e15bb3b0da727b1a4f64de5
Write this down; you will need it later when joining worker nodes.
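If you lose the join command, or the token expires (tokens are valid for 24 hours by default), you can print a fresh one on the master at any time:
kubeadm token create --print-join-command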
If you need to re-initialize, reset with:
kubeadm reset
rm -rf $HOME/.kube/config
Use the reset command with caution. If you do reset, it is best to reset every node and reboot the servers before running kubeadm init again.
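Also note that kubeadm reset does not flush iptables or IPVS rules; kubeadm's own reset output suggests cleaning them up manually if needed:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear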
With initialization complete, start kubelet:
systemctl enable kubelet && systemctl start kubelet
systemctl status kubelet
For detailed logs:
journalctl -xefu kubelet
[root@k8snode1 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
k8snode1   NotReady   master   10m   v1.14.0
[root@k8snode1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Obtain kube-flannel.yml and upload it to the master.
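One common way to obtain it (assuming internet access; the upstream path has moved between flannel releases, so adjust the URL if needed):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml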
kubectl apply -f kube-flannel.yml
Wait a moment, then check the master's status again; it should now be Ready:
[root@node2 opt]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    master   62m   v1.14.0
For a master (control-plane) cluster, the configuration file is different. First, on the master1 node:
[root@localhost ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
apiServer:
  certSANs:
    - master1
    - master2
    - master3
    - worker1
    - worker2
    - 192.168.33.11
    - 192.168.33.12
    - 192.168.33.13
    - 192.168.33.14
    - 192.168.33.15
controlPlaneEndpoint: "192.168.33.11:6443"
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Initialize the master cluster:
kubeadm init --config=kubeadm.yaml
On success you will see output similar to:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.33.11:6443 --token 0a0mf9.fc10msqpsk1lll8h \
--discovery-token-ca-cert-hash sha256:c913cd2c5fa9c47c177c3016e0addea2d9d78e64cb28b3c75b764ad7d1c52676 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.33.11:6443 --token 0a0mf9.fc10msqpsk1lll8h \
--discovery-token-ca-cert-hash sha256:c913cd2c5fa9c47c177c3016e0addea2d9d78e64cb28b3c75b764ad7d1c52676
Set up the kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
systemctl restart kubelet
Get kube-flannel.yml:
kubectl apply -f kube-flannel.yml
Wait a moment, then check the master's status again; it should now be Ready:
[root@node2 opt]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    master   62m   v1.14.0
Set up SSH key login between the other nodes and the master (then distribute the keys with ssh-copy-id as shown earlier):
ssh-keygen -t rsa
To configure the second master node, copy some configuration files over from the first master. Log in to the second master and run:
scp -r root@master1:/etc/kubernetes/pki .
scp -r root@master1:/etc/kubernetes/admin.conf .
Delete the superfluous files:
rm -f pki/apiserver*
rm -f pki/front-proxy-client.*
rm -f pki/etcd/healthcheck-client.* pki/etcd/peer.* pki/etcd/server.*
Copy them into place:
cp -rf pki /etc/kubernetes/
cp -rf admin.conf /etc/kubernetes/
#scp -r root@master1:~/kubeadm.yaml .
Set up the kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the cluster:
kubeadm join 192.168.33.11:6443 --token 0a0mf9.fc10msqpsk1lll8h \
  --discovery-token-ca-cert-hash sha256:c913cd2c5fa9c47c177c3016e0addea2d9d78e64cb28b3c75b764ad7d1c52676 \
  --control-plane
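As an aside, on kubeadm 1.15+ the manual certificate copying above can be avoided: kubeadm can distribute the control-plane certificates through the cluster itself (a sketch; the <key> placeholder is printed by the upload-certs command):
# On master1: upload the control-plane certs and print the decryption key
kubeadm init phase upload-certs --upload-certs
# On the joining master: pass that key to join
kubeadm join 192.168.33.11:6443 --token 0a0mf9.fc10msqpsk1lll8h \
  --discovery-token-ca-cert-hash sha256:c913cd2c5fa9c47c177c3016e0addea2d9d78e64cb28b3c75b764ad7d1c52676 \
  --control-plane --certificate-key <key>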
Now let's deploy the worker node, node3. On the master:
scp /etc/kubernetes/admin.conf root@node3:~
Log in to node3 and run:
mkdir -p $HOME/.kube
sudo cp -i $HOME/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join node3 to the master cluster:
kubeadm join 192.168.33.12:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:366892d67eedc0a66f5e40535661cac8df59ab321e15bb3b0da727b1a4f64de5
On the master, copy the file to node3:
scp kube-flannel.yml root@node3:/opt
On node2, start the flannel network:
kubectl apply -f kube-flannel.yml
After a moment, check whether node3 has joined the cluster:
kubectl get nodes
Check the status of the cluster's system pods:
kubectl get po -o wide -n kube-system