
Installing and Configuring Kubernetes in Mainland China

Environment

OS: CentOS Linux release 7.6.1810

Internal IP: 192.168.20.15

Kubernetes: v1.15.1

Docker: 19.03.1

Install Docker

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh --mirror Aliyun
systemctl enable docker
systemctl start docker

Add Docker registry mirrors for mainland China

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "debug": true,
  "insecure-registries": [
    "nas.pocketdigi.com:8083"
  ],
  "experimental": false,
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://reg-mirror.qiniu.com"
  ]
}
EOF
systemctl restart docker
docker login nas.pocketdigi.com:8083

nas.pocketdigi.com:8083 is a private registry that does not support HTTPS, so it has to be listed under insecure-registries; if you have no private registry, this entry is unnecessary.
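
After restarting Docker, an optional sanity check is to confirm the mirrors and the insecure registry were picked up; the grep patterns below assume the English-language output of docker info:

docker info | grep -A 3 'Registry Mirrors'
docker info | grep -A 2 'Insecure Registries'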

Add the Aliyun repo and install kubelet, kubeadm, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
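
Note that yum install without a version pulls the newest packages, which may be newer than the v1.15.1 images used later in this post. If you want to pin the tools to match, a sketch (the -0 release suffix is an assumption; check yum list --showduplicates kubeadm for the exact versions the Aliyun mirror carries):

yum install -y kubelet-1.15.1-0 kubeadm-1.15.1-0 kubectl-1.15.1-0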

Configure iptables

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Disable swap

Comment out the swap partition in /etc/fstab and reboot.
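
If you would rather not reboot, a minimal sketch that turns swap off immediately and comments out the fstab entry in one step (the sed pattern assumes the swap line is not already commented out):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab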

Create the cluster

kubeadm init --apiserver-advertise-address=192.168.20.15 --pod-network-cidr=10.244.0.0/16 --apiserver-cert-extra-sans=nas.pocketdigi.com

Because this server has no public IP and is reached through an nginx proxy, the external IP or domain name that will ultimately be used to access the API server must be added via --apiserver-cert-extra-sans.

On a server inside mainland China, the k8s.gcr.io registry is blocked, so you will see the following error:


[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'


error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1

This is easy to work around: find a server outside mainland China, pull the required images there, push them to your own private registry, then pull them on the target machine and re-tag them. No private registry? No problem; I have already pushed the 1.15.1 images to hub.docker.com.

On a server that can reach k8s.gcr.io:

docker pull k8s.gcr.io/kube-apiserver:v1.15.1
docker pull k8s.gcr.io/kube-controller-manager:v1.15.1
docker pull k8s.gcr.io/kube-scheduler:v1.15.1
docker pull k8s.gcr.io/kube-proxy:v1.15.1
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.3.10
docker pull k8s.gcr.io/coredns:1.3.1

docker tag k8s.gcr.io/kube-apiserver:v1.15.1 nas.pocketdigi.com:8083/k8s.gcr.io/kube-apiserver:v1.15.1
docker tag k8s.gcr.io/kube-controller-manager:v1.15.1 nas.pocketdigi.com:8083/k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag k8s.gcr.io/kube-scheduler:v1.15.1 nas.pocketdigi.com:8083/k8s.gcr.io/kube-scheduler:v1.15.1
docker tag k8s.gcr.io/kube-proxy:v1.15.1 nas.pocketdigi.com:8083/k8s.gcr.io/kube-proxy:v1.15.1
docker tag k8s.gcr.io/pause:3.1 nas.pocketdigi.com:8083/k8s.gcr.io/pause:3.1
docker tag k8s.gcr.io/etcd:3.3.10 nas.pocketdigi.com:8083/k8s.gcr.io/etcd:3.3.10
docker tag k8s.gcr.io/coredns:1.3.1 nas.pocketdigi.com:8083/k8s.gcr.io/coredns:1.3.1

docker push nas.pocketdigi.com:8083/k8s.gcr.io/kube-apiserver:v1.15.1
docker push nas.pocketdigi.com:8083/k8s.gcr.io/kube-controller-manager:v1.15.1
docker push nas.pocketdigi.com:8083/k8s.gcr.io/kube-scheduler:v1.15.1
docker push nas.pocketdigi.com:8083/k8s.gcr.io/kube-proxy:v1.15.1
docker push nas.pocketdigi.com:8083/k8s.gcr.io/pause:3.1
docker push nas.pocketdigi.com:8083/k8s.gcr.io/etcd:3.3.10
docker push nas.pocketdigi.com:8083/k8s.gcr.io/coredns:1.3.1

Back on the machine where Kubernetes is being installed:

docker pull nas.pocketdigi.com:8083/k8s.gcr.io/kube-apiserver:v1.15.1
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/kube-controller-manager:v1.15.1
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/kube-scheduler:v1.15.1
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/kube-proxy:v1.15.1
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/pause:3.1
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/etcd:3.3.10
docker pull nas.pocketdigi.com:8083/k8s.gcr.io/coredns:1.3.1

docker tag nas.pocketdigi.com:8083/k8s.gcr.io/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag nas.pocketdigi.com:8083/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

If you don't have access to such a server, just use the commands below to pull the images I mirrored to hub.docker.com:

docker pull pocketdigi/kube-apiserver:v1.15.1
docker pull pocketdigi/kube-controller-manager:v1.15.1
docker pull pocketdigi/kube-scheduler:v1.15.1
docker pull pocketdigi/kube-proxy:v1.15.1
docker pull pocketdigi/pause:3.1
docker pull pocketdigi/etcd:3.3.10
docker pull pocketdigi/coredns:1.3.1

docker tag pocketdigi/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag pocketdigi/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag pocketdigi/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag pocketdigi/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag pocketdigi/pause:3.1 k8s.gcr.io/pause:3.1
docker tag pocketdigi/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag pocketdigi/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

Run init again:

kubeadm init --apiserver-advertise-address=192.168.20.15 --pod-network-cidr=10.244.0.0/16 --apiserver-cert-extra-sans=nas.pocketdigi.com

--pod-network-cidr is set because we will use the flannel network plugin later, and 10.244.0.0/16 is flannel's default network.

On success you will see output like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.15:6443 --token m5sli6.w3z0zprk0883acuo \
--discovery-token-ca-cert-hash sha256:114fd2e850b62e7cb9924be9fb980d75e10e3e4c8e2505eddbfdd2e0f081b964

Save the token: it expires after 24 hours. If you need to add nodes to the cluster after that, create a new token (the discovery-token-ca-cert-hash value does not change):

kubeadm token create
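
If the original init output has been lost, the full join command can also be regenerated on the master; --print-join-command is available in kubeadm 1.15, and the openssl pipeline below is the standard way to recompute the CA certificate hash by hand:

kubeadm token create --print-join-command
# or recompute only the discovery-token-ca-cert-hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'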

Copy the config file that kubectl needs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network plugin; pull the image from the mirror first:

docker pull pocketdigi/flannel:v0.11.0-amd64
docker tag pocketdigi/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
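
To confirm the network plugin is working, check that the flannel and coredns pods reach Running and that the node turns Ready (pod names will differ on your cluster):

kubectl get pods -n kube-system -o wide
kubectl get nodes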

Allow pods to be scheduled on the master node (not allowed by default):

kubectl taint nodes --all node-role.kubernetes.io/master-

Later, once more nodes have been added and you want to stop scheduling pods on the master, restore the taint with:

kubectl taint nodes kubernetes-master  node-role.kubernetes.io/master=:NoSchedule

kubernetes-master is the master node's name.
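
To look up the node name and confirm the taint is set, something like the following works (kubernetes-master here is just the example name from above):

kubectl get nodes
kubectl describe node kubernetes-master | grep Taints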

If you manage the cluster with an externally installed Rancher, it will fail to register; disable the firewall to fix this:

systemctl stop firewalld && systemctl disable firewalld

Nginx Ingress Controller

If you want to expose HTTP services externally, installing an Ingress controller is recommended; see the official documentation.

The yaml from the official documentation does not expose ports 80 and 443, so download it and modify it.

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Add hostNetwork: true under the spec.template.spec node of the nginx-ingress-controller Deployment so the controller uses the host network.
If you have several machines rather than a single master node, also change the Deployment into a DaemonSet so that nginx is deployed on every node. The modified file looks like this:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

Then pull the image on the master and worker nodes:

docker pull pocketdigi/nginx-ingress-controller:0.25.0
docker tag pocketdigi/nginx-ingress-controller:0.25.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0

On the master node run:

kubectl apply -f mandatory.yaml

Add the ingress-nginx Service. This step differs across hosting providers; see the official documentation. For an ordinary VM the following is usually enough:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

Import the SSL certificate

kubectl create secret tls nas-pocketdigi-com --key private.pem  --cert fullchain.pem --namespace prod

nas-pocketdigi-com is the secret (certificate) name; mine is a wildcard certificate. prod is the namespace to import it into, which must be created first.
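
The prod namespace must exist before the secret can be created; a sketch of creating it and then verifying the secret landed in it (names follow the example above):

kubectl create namespace prod
kubectl get secret nas-pocketdigi-com -n prod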

Test

nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: prod
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - name: default
      port: 80
      protocol: TCP
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: prod
spec:
  tls:
    - hosts:
        - nginx.nas.pocketdigi.com
      secretName: nas-pocketdigi-com
  rules:
    - host: nginx.nas.pocketdigi.com
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: prod
spec:
  progressDeadlineSeconds: 180
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          imagePullPolicy: Always
          image: nginx:1.17.1-alpine
          resources:
            requests:
              memory: "256Mi"
              cpu: "0.1"
            limits:
              memory: "384Mi"
              cpu: "0.8"
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 80
              scheme: HTTP
            initialDelaySeconds: 120
            periodSeconds: 20
            successThreshold: 1
            timeoutSeconds: 2
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 80
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 20
            successThreshold: 2
            timeoutSeconds: 3
kubectl apply -f nginx.yaml

If nginx.nas.pocketdigi.com already resolves to this host, you should now be able to reach nginx.
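
For a quick check from a machine on the internal network, even before DNS has been updated, curl can pin the hostname to the node's IP; --resolve avoids editing /etc/hosts, and -k skips certificate verification in case the certificate does not match:

curl -k --resolve nginx.nas.pocketdigi.com:443:192.168.20.15 https://nginx.nas.pocketdigi.com/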