Environment preparation

Prepare two machines: a master and a node.

IPs:

master: 192.168.13.129

node: 192.168.13.130

OS: CentOS 7

Note:

kubernetes 1.12.2 supports Docker versions up to 18.06; anything newer will cause errors.

Versions used here: kubernetes 1.12.2 + docker-ce 18.06.1.ce
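To guard against accidentally installing a Docker release that is too new, the version check can be scripted. The `docker_ok` helper below is my own sketch (not part of the original walkthrough); it compares the major.minor components against the 18.06 ceiling:

```shell
# docker_ok: succeed if a Docker version string (e.g. 18.06.1) is no newer
# than 18.06. Pure POSIX sh, for illustration only.
docker_ok() {
  major=${1%%.*}        # text before the first dot
  rest=${1#*.}
  minor=${rest%%.*}     # text between the first and second dots
  [ "$major" -lt 18 ] || { [ "$major" -eq 18 ] && [ "$minor" -le 6 ]; }
}

# Example use on a node (uncomment after Docker is installed):
# docker_ok "$(docker version --format '{{.Server.Version}}')" || echo "Docker too new for kubernetes 1.12.2"
```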

Environment configuration

Run the following steps on both master and node:

1. Security policy configuration

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
iptables -F
iptables -t nat -F
iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
yum -y install ntp
ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd

2. Kernel settings

# Disable SELinux
vim /etc/sysconfig/selinux
SELINUX=disabled
# Adjust kernel parameters
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness=0
# Turn off swap
swapoff -a
# Comment out the swap entry so it is not remounted on boot
vim /etc/fstab
# Disable SELinux permanently
vim /etc/selinux/config
# Apply the kernel parameters
sysctl -p
# Make sure both of these files report 1:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
# Set the following value on every node:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
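The manual vim edits above can also be done non-interactively when scripting node setup. These two helpers are my own sketch; they take the file path as an argument so they can be tried on a copy before touching /etc:

```shell
# Set SELINUX=disabled in the given SELinux config file.
disable_selinux() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# Comment out any swap entry in the given fstab so it is not mounted on boot.
comment_swap() {
  sed -i '/[[:space:]]swap[[:space:]]/ s/^#*/#/' "$1"
}

# On a real node:
# disable_selinux /etc/selinux/config
# comment_swap /etc/fstab
# swapoff -a
```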

3. Hostname resolution, passwordless SSH, time synchronization

a. Hostname resolution (run on the master):

vim /etc/hosts
# Example:
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.13.129 master
192.168.13.130 node
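If node setup is scripted, hosts entries can be added idempotently instead of edited by hand. This helper is my own sketch; the file path is a parameter so it is easy to test on a copy first:

```shell
# Append "ip name" to a hosts file unless that hostname is already present.
add_host() {
  ip=$1; name=$2; file=$3
  grep -q "[[:space:]]$name\$" "$file" || echo "$ip $name" >> "$file"
}

# On the master:
# add_host 192.168.13.129 master /etc/hosts
# add_host 192.168.13.130 node /etc/hosts
```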

b. Passwordless SSH (run on the master):

# Generate a key pair; just press Enter at each prompt
ssh-keygen -t rsa
# Copy the public key to each host
ssh-copy-id -i ~/.ssh/id_rsa.pub master
ssh-copy-id -i ~/.ssh/id_rsa.pub node

c. Time synchronization (run on both master and node):

yum -y install ntp
ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd

Install Docker (following the official installation instructions)

Configure the docker-ce repo (on both master and node):

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker:

yum list docker-ce --showduplicates | sort -r
# kubernetes 1.12.2 supports Docker up to 18.06, but Docker has already
# moved on to 18.09, so do not let yum install the latest version --
# pin the version explicitly:
yum install docker-ce-18.06.1.ce

Start the service and enable it at boot:

systemctl start docker
systemctl enable docker
systemctl status docker

Configure the kubernetes repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Verify the repo
yum repolist

Install on the master:

yum install -y kubelet kubeadm kubectl

List the images the cluster components need:

kubeadm config images list

Service components:

k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

Pull the images from Docker Hub:

docker pull guobq/kube-apiserver:v1.12.2
docker pull guobq/kube-controller-manager:v1.12.2
docker pull guobq/kube-scheduler:v1.12.2
docker pull guobq/kube-proxy:v1.12.2
docker pull guobq/pause:3.1
docker pull guobq/etcd:3.2.24
docker pull guobq/coredns:1.2.2

Tag the downloaded images with the same names as the service components:

# example
docker tag guobq/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
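Rather than repeating the pull-and-tag pair seven times by hand, the whole sequence can be generated in a loop. The `mirror_name` helper below is my own sketch; it assumes the guobq Docker Hub account mirrors every image under the same name and tag, minus the k8s.gcr.io prefix. The loop prints the commands so they can be reviewed first, then piped to sh:

```shell
# Derive the Docker Hub mirror name from an official image name
# (assumption: guobq/<name>:<tag> mirrors k8s.gcr.io/<name>:<tag>).
mirror_name() {
  echo "$1" | sed 's|^k8s\.gcr\.io/|guobq/|'
}

images="k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2"

# Dry run: print the commands; pipe the output to sh to execute them.
for img in $images; do
  src=$(mirror_name "$img")
  echo "docker pull $src"
  echo "docker tag $src $img"
done
```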

Initialize the cluster

Run only on the master (replace the address with your master's IP):

kubeadm init \
--kubernetes-version=v1.12.2 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.13.129

After initialization the master prints a join command containing a token; copy it and keep it safe, since it is needed to add nodes:

kubeadm join 192.168.13.129:6443 --token bppavd.uo0spqpyn6g49fr1 --discovery-token-ca-cert-hash sha256:f526f033651e4f152d551f0950471c53401e95c0ce6b22ae63d3937dfa8fe057

If you lose the token, you can list it again with:

kubeadm token list
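The token list does not include the CA certificate hash, but that can be recomputed from the cluster CA. The function below merely wraps the openssl pipeline documented for kubeadm; the helper name is my own:

```shell
# Recompute the sha256 discovery hash of a CA certificate's public key,
# as used in --discovery-token-ca-cert-hash. On the master the cluster CA
# is /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```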

At this point even root cannot yet control the cluster with kubectl; the kubeconfig environment needs to be set up first:

export KUBECONFIG=/etc/kubernetes/admin.conf
# Or make it permanent via ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# For non-root users:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy flannel

kubectl apply -f kube-flannel.yml

The kube-flannel.yml file is available at: https://github.com/guobq/k8s/blob/master/kube-flannel/kube-flannel.yaml

Check the cluster and prepare the nodes

# As root on the master, check that the pods come up:
kubectl get pods --all-namespaces

Packages to install on each node:

yum install -y kubelet kubeadm kubectl
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Docker images required on each node:

kube-proxy:v1.12.2
pause:3.1
coredns:1.2.2
etcd:3.2.24 # etcd on the node is optional

Join each node using the token saved earlier:

kubeadm join 192.168.13.129:6443 --token bppavd.uo0spqpyn6g49fr1 --discovery-token-ca-cert-hash sha256:f526f033651e4f152d551f0950471c53401e95c0ce6b22ae63d3937dfa8fe057

Configure the environment on each node

# root:
export KUBECONFIG=/etc/kubernetes/kubelet.conf
# non-root:
sudo cp /etc/kubernetes/kubelet.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/kubelet.conf
export KUBECONFIG=$HOME/kubelet.conf

Check the nodes (on the master):

kubectl get nodes

Deploy the dashboard

Pull the image:

docker pull guobq/dashboard:v1.8.3
# Retag the image
docker tag guobq/dashboard:v1.8.3 k8s.gcr.io/kubernetes-dashboard:v1.8.3

Start the dashboard:

kubectl apply -f kubernetes-dashboard.yaml

kubernetes-dashboard.yaml is available at: https://github.com/guobq/k8s/blob/master/dashboard/kubernetes-dashboard.yaml

Deploy metrics-server

Pull the image:

docker pull guobq/metrics-server:v0.3.1
# Retag the image
docker tag guobq/metrics-server:v0.3.1 k8s.gcr.io/metrics-server:v0.3.1

Start metrics-server:

kubectl apply -f ./

The yaml files are at: https://github.com/guobq/k8s/tree/master/metrics-server

Images required for version 1.13.2

k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6