Kubernetes Service

Date: Dec. 10, 2018


Service

A controller keeps an application running by dynamically creating and destroying Pods. Each Pod has its own IP, so whenever the controller replaces a Pod its IP changes, which makes serving traffic directly from Pods impractical. Services exist to solve this problem.

In Kubernetes, a Service logically represents a group of Pods, selected by label. A Service has its own IP address, and that address never changes; clients send requests to the Service IP, and Kubernetes only has to maintain the mapping between the Service and its Pods.

First, create the Pods:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
          - containerPort: 80

Apply the yaml:

[why@why-01 ~]$ kubectl apply -f httpd.yml 
deployment.apps/httpd created
[why@why-01 ~]$ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
httpd-56f68bf886-26h5n   1/1     Running   0          19s   10.244.1.15   why-02   <none>           <none>
httpd-56f68bf886-9f8xn   1/1     Running   0          19s   10.244.2.49   why-03   <none>           <none>
httpd-56f68bf886-z56rh   1/1     Running   0          19s   10.244.1.16   why-02   <none>           <none>

Each Pod has been assigned its own IP, and each one can be accessed directly:

[why@why-01 ~]$ curl 10.244.1.15
<html><body><h1>It works!</h1></body></html>
[why@why-01 ~]$ curl 10.244.2.49
<html><body><h1>It works!</h1></body></html>
[why@why-01 ~]$ curl 10.244.1.16
<html><body><h1>It works!</h1></body></html>

Configure the Service

apiVersion: v1
kind: Service
metadata: 
  name: httpd-svc
spec:
  selector:
    app: httpd
  ports: 
  - protocol: TCP
    port: 8000
    targetPort: 80

  • The selector picks the Pods labeled app: httpd
  • ports maps the Service's TCP port 8000 to the Pods' port 80

Apply it and check the Service:

[why@why-01 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
httpd-svc    ClusterIP   10.103.120.16   <none>        8000/TCP   16s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    2d12h         

Access it via the Service IP:

[why@why-01 ~]$ curl 10.103.120.16:8000
<html><body><h1>It works!</h1></body></html>

You can also use describe to see the mapping to the Pods:

[why@why-01 ~]$ kubectl describe service httpd-svc
Name:              httpd-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"httpd-svc","namespace":"default"},"spec":{"ports":[{"port":8000,"...
Selector:          app=httpd
Type:              ClusterIP
IP:                10.103.120.16
Port:              <unset>  8000/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.15:80,10.244.1.16:80,10.244.2.49:80
Session Affinity:  None
Events:            <none>

How a Service works

The mechanism is iptables.

Dump the firewall rules with iptables-save and the relevant rules show up:

-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.120.16/32 -p tcp -m comment --comment "default/httpd-svc: cluster IP" -m tcp --dport 8000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.120.16/32 -p tcp -m comment --comment "default/httpd-svc: cluster IP" -m tcp --dport 8000 -j KUBE-SVC-RL3JAE4GN7VOGDGP

  • Traffic to 10.103.120.16 port 8000 that does not originate from the Pod network (source not in 10.244.0.0/16) jumps to KUBE-MARK-MASQ, which marks it for masquerading
  • All traffic destined for 10.103.120.16 port 8000 jumps to the chain KUBE-SVC-RL3JAE4GN7VOGDGP

Now look at the KUBE-SVC-RL3JAE4GN7VOGDGP chain:

-A KUBE-SVC-RL3JAE4GN7VOGDGP -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-QJTXM254XNYIONR6
-A KUBE-SVC-RL3JAE4GN7VOGDGP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-54RWZMUYIMEODILT
-A KUBE-SVC-RL3JAE4GN7VOGDGP -j KUBE-SEP-FEAYXTP74T7ABRAK

  1. With probability 1/3, jump to KUBE-SEP-QJTXM254XNYIONR6
  2. With probability 1/3 (1/2 of the remaining 2/3), jump to KUBE-SEP-54RWZMUYIMEODILT
  3. With the remaining 1/3, jump to KUBE-SEP-FEAYXTP74T7ABRAK

Each KUBE-SEP chain then DNATs to one Pod:

-A KUBE-SEP-54RWZMUYIMEODILT -s 10.244.1.16/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-54RWZMUYIMEODILT -p tcp -m tcp -j DNAT --to-destination 10.244.1.16:80
-A KUBE-SEP-FEAYXTP74T7ABRAK -s 10.244.2.49/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FEAYXTP74T7ABRAK -p tcp -m tcp -j DNAT --to-destination 10.244.2.49:80
-A KUBE-SEP-QJTXM254XNYIONR6 -s 10.244.1.15/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-QJTXM254XNYIONR6 -p tcp -m tcp -j DNAT --to-destination 10.244.1.15:80

Requests to the Service are forwarded to the Pods with a roughly round-robin load-balancing policy, and every node in the cluster is configured with the same iptables rules.
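
To check these rules on a node yourself (assuming kube-proxy runs in its default iptables mode), you can filter the NAT table by the Service name:

sudo iptables-save -t nat | grep httpd-svc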

Accessing a Service via DNS

As of version 1.13 the cluster DNS is coredns, which replaced the earlier kube-dns.

[why@why-01 ~]$ kubectl get deployments --all-namespaces
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2/2     2            2           2d23h

Whenever a new Service is created, coredns adds a DNS record for it. Pods in the cluster can then access the Service via <SERVICE_NAME>.<NAMESPACE_NAME>:

[why@why-01 ~]$ kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.

/ # wget httpd-svc.default:8000
Connecting to httpd-svc.default:8000 (10.103.120.16:8000)
index.html           100% |***********************************************************************************************************************************************|    45  0:00:00 ETA
/ # nslookup httpd-svc.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      httpd-svc.default.svc.cluster.local
Address 1: 10.103.120.16 httpd-svc.default.svc.cluster.local
/ # nslookup httpd-svc.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      httpd-svc.default
Address 1: 10.103.120.16 httpd-svc.default.svc.cluster.local
/ # nslookup httpd-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      httpd-svc
Address 1: 10.103.120.16 httpd-svc.default.svc.cluster.local

Newer busybox images have some problems here; pin the image to busybox:1.28.3 instead.
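
For example, the run command above with the image pinned:

kubectl run busybox --rm -ti --image=busybox:1.28.3 /bin/sh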

The DNS server itself is kube-dns.kube-system.svc.cluster.local, and httpd-svc, httpd-svc.default, and httpd-svc.default.svc.cluster.local all resolve to the Service we created.

<NAMESPACE_NAME> defaults to default; for a Service in another namespace, qualify the name explicitly, as sketched below.
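
A sketch with hypothetical names: a Service my-svc exposing port 8000 in namespace prod would be reached from a Pod like this:

/ # wget my-svc.prod:8000
/ # nslookup my-svc.prod.svc.cluster.local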

Accessing a Service from outside the cluster

  • NodePort: the Service is exposed on a static port on each cluster node, so it can be reached from outside the cluster via <NodeIP>:<NodePort>
  • LoadBalancer: the Service is exposed through a cloud provider's load balancer, and the cloud provider directs the load balancer's traffic to the Service. Supported providers include GCP, AWS, and Azure.

Again in httpd-service.yml:

apiVersion: v1
kind: Service
metadata: 
  name: httpd-svc
spec:
  type: NodePort
  selector:
    app: httpd
  ports: 
  - protocol: TCP
    port: 8000
    targetPort: 80

The type is set to NodePort. Apply it:

[why@why-01 ~]$ kubectl apply -f httpd-service.yml 
service/httpd-svc created
[why@why-01 ~]$ kubectl get svc httpd-svc 
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
httpd-svc   NodePort   10.105.205.141   <none>        8000:32219/TCP   14s

The cluster has opened port 32219. In 8000:32219/TCP, 8000 is the cluster-internal Service port and 32219 is the port on each Node; the Service can be reached through port 32219 on any Node, which also lets an external load balancer forward requests to the Service from outside the cluster.

By default Kubernetes allocates an available port in the 30000-32767 range, but you can set nodePort explicitly under ports:

apiVersion: v1
kind: Service
metadata: 
  name: httpd-svc
spec:
  type: NodePort
  selector:
    app: httpd
  ports: 
  - protocol: TCP
    nodePort: 30000
    port: 8000
    targetPort: 80

Check the listening port on the node:

[why@why-01 ~]$ ss -nlpt | grep 32219
LISTEN     0      128         :::32219                   :::*           

The node is listening on port 32219; verify it:

[why@why-01 ~]$ curl why-01:32219
<html><body><h1>It works!</h1></body></html>
[why@why-01 ~]$ curl why-02:32219
<html><body><h1>It works!</h1></body></html>
[why@why-01 ~]$ curl why-03:32219
<html><body><h1>It works!</h1></body></html>

The mechanism is again iptables:

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/httpd-svc:" -m tcp --dport 32219 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/httpd-svc:" -m tcp --dport 32219 -j KUBE-SVC-RL3JAE4GN7VOGDGP

  • Traffic to destination port 32219 jumps to KUBE-MARK-MASQ, which marks it for masquerading
  • It is then forwarded to the chain KUBE-SVC-RL3JAE4GN7VOGDGP

KUBE-SVC-RL3JAE4GN7VOGDGP is the same chain we saw when discussing Services; it does the load balancing.
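
To see the NodePort rules on a node (same assumption about kube-proxy's iptables mode as above):

sudo iptables-save -t nat | grep KUBE-NODEPORTS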

Rolling Update

A rolling update replaces a small batch of replicas at a time and, once those succeed, moves on to more, until all replicas have been updated. Its biggest benefit is zero downtime: replicas keep running throughout the update, so the service stays continuously available.

[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
httpd   3/3     3            3           11h   httpd        httpd    app=httpd

The current image version is httpd, i.e. latest. The details:

[why@why-01 ~]$ kubectl describe deployments httpd 
Name:                   httpd
Namespace:              default
CreationTimestamp:      Sun, 09 Dec 2018 03:59:41 +0800
Labels:                 app=httpd
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"httpd","namespace":"default"},"spec":{"replicas":3,"...
Selector:               app=httpd
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=httpd
  Containers:
   httpd:
    Image:        httpd
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   httpd-56f68bf886 (3/3 replicas created)
Events:          <none>

The current ReplicaSet is httpd-56f68bf886.

To update to httpd:2.2.31, simply change the image version in the yml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.2.31
        ports:
          - containerPort: 80

Just apply again.
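
A sketch, assuming the manifest is still in httpd.yml as before:

kubectl apply -f httpd.yml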

[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     1            3           11h   httpd        httpd:2.2.31   app=httpd
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     2            3           11h   httpd        httpd:2.2.31   app=httpd
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           11h   httpd        httpd:2.2.31   app=httpd

You can clearly see the UP-TO-DATE count climb from 1 to 3.

[why@why-01 ~]$ kubectl describe deployments httpd
Name:                   httpd
Namespace:              default
CreationTimestamp:      Sun, 09 Dec 2018 03:59:41 +0800
Labels:                 app=httpd
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"httpd","namespace":"default"},"spec":{"replicas":3,"...
Selector:               app=httpd
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=httpd
  Containers:
   httpd:
    Image:        httpd:2.2.31
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   httpd-8cf6ccf7d (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  6m55s  deployment-controller  Scaled up replica set httpd-8cf6ccf7d to 1
  Normal  ScalingReplicaSet  6m45s  deployment-controller  Scaled down replica set httpd-56f68bf886 to 2
  Normal  ScalingReplicaSet  6m45s  deployment-controller  Scaled up replica set httpd-8cf6ccf7d to 2
  Normal  ScalingReplicaSet  6m34s  deployment-controller  Scaled down replica set httpd-56f68bf886 to 1
  Normal  ScalingReplicaSet  6m34s  deployment-controller  Scaled up replica set httpd-8cf6ccf7d to 3
  Normal  ScalingReplicaSet  6m30s  deployment-controller  Scaled down replica set httpd-56f68bf886 to 0

The Events show the pattern: each time the new ReplicaSet httpd-8cf6ccf7d finishes creating a Pod, a Pod is removed from the old ReplicaSet httpd-56f68bf886.

Kubernetes also provides two parameters, maxSurge and maxUnavailable, for fine-grained control over how many Pods are replaced at a time; they are covered in detail later.

Rollback

Every time you update an application with kubectl apply, Kubernetes records the current configuration as a revision, which makes it possible to roll back to any specific revision.

The rollout history of the httpd Deployment so far has no CHANGE-CAUSE recorded:

[why@why-01 ~]$ kubectl rollout history deployment httpd
deployment.extensions/httpd 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

By default, Kubernetes only keeps the most recent revisions; you can raise that number with the revisionHistoryLimit property in the Deployment configuration.
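
A sketch of where the property goes; the value here is just an example:

spec:
  revisionHistoryLimit: 10    # keep the 10 most recent revisions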

Now create httpd.v1.yml, httpd.v2.yml, and httpd.v3.yml, pointing at httpd image versions 2.4.16, 2.4.17, and 2.4.18 respectively.
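
httpd.v1.yml is presumably the same Deployment as before with only the image tag changed; a sketch:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.16
        ports:
          - containerPort: 80

httpd.v2.yml and httpd.v3.yml differ only in the image tag (2.4.17 and 2.4.18). Apply each in turn: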

[why@why-01 ~]$ kubectl apply -f httpd.v1.yml --record
deployment.apps/httpd configured
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           12h   httpd        httpd:2.4.16   app=httpd
[why@why-01 ~]$ kubectl apply -f httpd.v2.yml --record
deployment.apps/httpd unchanged
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           12h   httpd        httpd:2.4.17   app=httpd
[why@why-01 ~]$ kubectl apply -f httpd.v3.yml --record
deployment.apps/httpd configured
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           12h   httpd        httpd:2.4.18   app=httpd

Check the history again and the commands are now recorded:

[why@why-01 ~]$ kubectl rollout history deployment httpd
deployment.extensions/httpd 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl apply --filename=httpd.v1.yml --record=true
4         kubectl apply --filename=httpd.v2.yml --record=true
5         kubectl apply --filename=httpd.v3.yml --record=true

Roll back to revision 4, which should put the image back at 2.4.17:

[why@why-01 ~]$ kubectl rollout undo deployment httpd --to-revision=4
deployment.extensions/httpd rolled back
[why@why-01 ~]$ kubectl get deployments httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           12h   httpd        httpd:2.4.17   app=httpd

Rolling back to revision 4 again now fails:

[why@why-01 ~]$ kubectl rollout undo deployment httpd --to-revision=4
error: unable to find specified revision 4 in history
[why@why-01 ~]$ kubectl rollout history deployment httpd
deployment.extensions/httpd 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl apply --filename=httpd.v1.yml --record=true
5         kubectl apply --filename=httpd.v3.yml --record=true
6         kubectl apply --filename=httpd.v2.yml --record=true

Revision 4 has become revision 6: rolling back re-applied that configuration as the newest revision, so the old number disappeared from the history.

Health Check

Kubernetes can self-heal by automatically restarting failed containers, and the Liveness and Readiness probe mechanisms allow more fine-grained health checks.

This makes it possible to achieve:

  • Zero-downtime deployments
  • Avoiding the rollout of broken images
  • Safer rolling updates

The default health check

By default, each container starts the process defined by its Dockerfile's CMD or ENTRYPOINT; if that process exits with a non-zero status code, the container is considered failed.

To simulate a failure, run something like /bin/sh -c "sleep 10; exit 1", as in the sketch below.
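
A minimal Pod sketch (the name is hypothetical) that fails this default check:

apiVersion: v1
kind: Pod
metadata:
  name: default-check
spec:
  restartPolicy: OnFailure
  containers:
  - name: default-check
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 1    # exits non-zero, so the container is judged failed and restarted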

Liveness probe

A liveness probe lets you define your own check for whether a container is healthy.

liveness.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c 
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5

  • initialDelaySeconds: 10 starts Liveness probing 10s after the container starts
  • periodSeconds: 5 runs the probe every 5s; if the probe fails 3 times in a row, Kubernetes kills and restarts the container (see the fragment below)
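
The threshold of 3 is the default failureThreshold; a sketch of setting it explicitly inside the probe above:

    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3    # consecutive failures before the container is restarted (3 is the default)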

After applying, watch the Pod's Events:

[why@why-01 ~]$ kubectl describe pod liveness
... (part of the output omitted)
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  39s   default-scheduler  Successfully assigned default/liveness to why-03
  Normal   Pulling    38s   kubelet, why-03    pulling image "busybox"
  Normal   Pulled     34s   kubelet, why-03    Successfully pulled image "busybox"
  Normal   Created    34s   kubelet, why-03    Created container
  Normal   Started    34s   kubelet, why-03    Started container
  Warning  Unhealthy  0s    kubelet, why-03    Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

About 35 seconds later the probe has failed three times:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  79s                default-scheduler  Successfully assigned default/liveness to why-03
  Normal   Pulled     74s                kubelet, why-03    Successfully pulled image "busybox"
  Normal   Created    74s                kubelet, why-03    Created container
  Normal   Started    74s                kubelet, why-03    Started container
  Warning  Unhealthy  30s (x3 over 40s)  kubelet, why-03    Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Pulling    0s (x2 over 78s)   kubelet, why-03    pulling image "busybox"
  Normal   Killing    0s                 kubelet, why-03    Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.

The container is then killed and restarted, though the whole cycle takes quite a while:

[why@why-01 ~]$ kubectl get pod liveness
NAME       READY   STATUS    RESTARTS   AGE
liveness   1/1     Running   6          9m14s

Readiness probe

  • A Liveness probe tells Kubernetes when to restart a container to achieve self-healing
  • A Readiness probe tells Kubernetes when a container is ready to be added to a Service's load-balancing pool and serve traffic

Just change livenessProbe in the previous manifest to readinessProbe, as sketched below.
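
A sketch of readiness.yml, assuming everything else stays the same:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness
spec:
  restartPolicy: OnFailure
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5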

At first the Pod is READY:

[why@why-01 ~]$ kubectl get pods readiness 
NAME       READY   STATUS    RESTARTS   AGE
readiness    1/1     Running   0          39s

After the probe has failed more than 3 times, it is no longer READY:

[why@why-01 ~]$ kubectl get pods readiness 
NAME       READY   STATUS    RESTARTS   AGE
readiness    0/1     Running   0          80s

Liveness vs. Readiness

  1. Liveness and Readiness probes are two independent Health Check mechanisms. Without explicit configuration, Kubernetes applies the same default behavior to both: it judges success by whether the container's startup process exits with a zero status code.
  2. The two probes are configured the same way and support the same parameters. They differ in what happens on failure: a failed Liveness probe restarts the container, while a failed Readiness probe marks the container unavailable so it stops receiving requests forwarded by the Service.
  3. The probes are executed independently and have no dependency on each other, so they can be used separately or together: Liveness to decide whether a container needs a restart to self-heal, Readiness to decide whether it is ready to serve traffic.

For web services you can probe with httpGet directly:

readinessProbe:
  httpGet:
    scheme: HTTP
    path: /healthy
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

An httpGet probe is considered successful when the HTTP response code is between 200 and 400.

  • scheme specifies the protocol, HTTP (the default) or HTTPS
  • path specifies the request path
  • port specifies the port

Health checks in rolling updates

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: readiness
        image: busybox
        args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 6000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

After applying, check the Pod status:

[why@why-01 ~]$ kubectl get pod 
NAME                    READY   STATUS    RESTARTS   AGE
test-5c4bf5897d-599dq   1/1     Running   0          7m32s
test-5c4bf5897d-9cd5s   1/1     Running   0          7m32s
test-5c4bf5897d-gg7d5   1/1     Running   0          7m32s
test-5c4bf5897d-hm9kl   1/1     Running   0          7m32s
test-5c4bf5897d-kdvvb   1/1     Running   0          7m32s
test-5c4bf5897d-m4nxt   1/1     Running   0          7m32s
test-5c4bf5897d-pcw8q   1/1     Running   0          7m32s
test-5c4bf5897d-v8zb7   1/1     Running   0          7m32s
test-5c4bf5897d-wqx8c   1/1     Running   0          7m32s
test-5c4bf5897d-zvjll   1/1     Running   0          7m32s

Upgrade the version via the yml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: readiness
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 6000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Since /tmp/healthy does not exist in the new replicas, they can never pass the Readiness probe; after applying again you can watch the update grind to a halt:

[why@why-01 ~]$ kubectl get pod 
NAME                    READY   STATUS        RESTARTS   AGE
test-5c4bf5897d-599dq   1/1     Running       0          11m
test-5c4bf5897d-9cd5s   0/1     Terminating   0          11m
test-5c4bf5897d-gg7d5   1/1     Running       0          11m
test-5c4bf5897d-hm9kl   1/1     Running       0          11m
test-5c4bf5897d-kdvvb   1/1     Running       0          11m
test-5c4bf5897d-m4nxt   1/1     Running       0          11m
test-5c4bf5897d-pcw8q   0/1     Terminating   0          11m
test-5c4bf5897d-v8zb7   1/1     Running       0          11m
test-5c4bf5897d-wqx8c   1/1     Running       0          11m
test-5c4bf5897d-zvjll   1/1     Running       0          11m
test-67d8c54bcb-7j56s   0/1     Running       0          32s
test-67d8c54bcb-gdnfn   0/1     Running       0          32s
test-67d8c54bcb-lf92q   0/1     Running       0          32s
test-67d8c54bcb-pbcpg   0/1     Running       0          32s
test-67d8c54bcb-pv4xd   0/1     Running       0          32s
[why@why-01 ~]$ kubectl get deployments test 
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   8/10    5            8           11m

You can see:

  • 5 new Pods were created as the new replicas; they are stuck not READY
  • The old replicas have been reduced from the original 10 to 8

In the kubectl get deployments test output:

  • READY 8/10: 8 of the 10 desired replicas are serving
  • UP-TO-DATE 5: 5 replicas have been updated
  • AVAILABLE 8: 8 replicas are available to serve

Because the new replicas can never pass the Readiness probe, this state persists indefinitely; it simulates a failed rolling update. But why were 2 old replicas destroyed while 5 new ones were created?

Rolling updates control how many replicas are replaced via the maxSurge and maxUnavailable parameters.

maxSurge

Controls how far the total number of replicas may exceed DESIRED during a rolling update. It can be an integer or a percentage; percentages are rounded up. The default is 25%.

For the operation above, DESIRED is 10, so the maximum replica count is roundUp( 10 + 10 * 25% ) = 13.

maxUnavailable

Controls the maximum number of replicas that may be unavailable during a rolling update, as a share of DESIRED. It can be an integer or a percentage; percentages are rounded down. The default is 25%.

For the operation above, that means at least 10 - roundDown( 10 * 25% ) = 8 replicas must stay available.

Under normal circumstances, the update above would proceed like this:

  1. First create 3 new replicas, bringing the total to 13
  2. Then destroy 2 old replicas, dropping the available count to 8
  3. Once those 2 old replicas are gone, 2 more new replicas can be created, keeping the total at 13
  4. As new replicas pass the Readiness probe, the available count rises above 8
  5. That allows more old replicas to be destroyed, bringing the available count back down to 8
  6. Destroying old replicas drops the total below 13, which allows more new replicas to be created
  7. The process repeats until all old replicas have been replaced and the rolling update completes

But here the process is stuck at step 4.

If the apply was made with --record=true, you can roll back with kubectl rollout undo.
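
For instance, to return this test Deployment to its previous revision:

kubectl rollout undo deployment test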

Tune maxSurge and maxUnavailable:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  strategy:
    rollingUpdate:
      maxSurge: 35%
      maxUnavailable: 35%
  replicas: 10
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: readiness
        image: busybox
        args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 6000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5