Getting Started with Kubernetes

Date: Oct. 22, 2017    Category:

Contents:

Introduction to Kubernetes

The secret behind Kubernetes' sudden rise

The Docker and CoreOS feud

  • In February 2013, Docker launched its official website and released its first demo version.
  • In March of the same year, Alex Polvi, in California, started his second venture out of his garage: CoreOS.

Upgrading Linux traditionally involves interrupting services and rebooting. CoreOS was created to allow a stable, seamless upgrade from any OS version to the latest release, that is, to decouple applications from the operating system. Docker, little known at the time, was chosen by Alex as the first application-isolation scheme supported by this OS. Not long after, they shipped their own distribution, named CoreOS.

Similarly, the author of KVM, after selling KVM to Red Hat, went on to develop the kernel-backed OSV operating system; where OSV goes from here remains to be seen.

In the beginning, Docker and CoreOS worked hand in hand. Later, CoreOS's boss came to see Docker as merely one component for building a platform, a tool to build with; meanwhile Docker kept growing, adding networking, moving toward distributed cloud services, offering image push/pull services, building out its own ecosystem, and winning funding plus backing from heavyweights such as Google.

CoreOS, for its part, stayed busy providing technical support for Docker.

In the end it was a conflict of interests that drove them apart.

Alex felt Docker had drifted from building an industry-standard container to building an enterprise service platform centered on containers. In December 2014, CoreOS launched its own standardized product, Rocket (rkt), claiming that Docker had fundamental flaws in security and composability which Rocket was designed to fix.

A CoreOS co-founder stated that once Rocket matured they would stop supporting Docker, which kept everyone on edge through the first half of 2015.

On May 4, 2015, Red Hat, Google, VMware and others threw their weight behind CoreOS, because Docker had left no room for anyone else to take part in its design. Docker and CoreOS broke with each other for good.

The container war, in the end

cgroups were originally proposed by Google and later merged into the Linux kernel; Android likewise relies on them, assigning each application its own cgroup for isolation.

Google had in fact been using container technology internally for a long time, with Borg and Omega.

Kubernetes is essentially a simplified version of Borg.

Back on October 3, 2013, Google had released its own Linux container tool, lmctfy, and Docker's maintainers drew plenty of useful ideas from it; that Google project ultimately went nowhere.

The Linux Foundation stepped in to mediate between Docker and CoreOS; the two made peace and created the OCP project (later renamed the Open Container Initiative). Docker contributed the container source code and its format became the standard, but since the committee members were all on good terms with Google, the direction of Docker's development effectively ended up in Google's hands.

The Kubernetes timeline

  • June 2014: Google open-sourced Kubernetes (k8s for short).
  • August 2014: Mesosphere integrated k8s as a framework into the Mesos ecosystem; VMware joined as well.
  • January 2015: Google brought k8s to OpenStack, making it possible to deploy k8s on OpenStack.
  • May 2015: Google and CoreOS released Tectonic, combining k8s with CoreOS.
  • July 2015: k8s v1.0 was released.

Kubernetes architecture

Key concepts in Kubernetes

Namespace

What it is

  • A means of isolation
  • Resources in different Namespaces cannot access one another
  • A common solution for multi-tenancy

Has

  • A distinct name

Related to

  • Resource
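
A Namespace is itself declared with a tiny manifest. A minimal sketch (the name dev is just a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: dev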

Resource

What it is

  • A kind of resource object in the cluster
  • Lives in some namespace
  • Can be persisted to etcd
  • A resource has state
  • Resources can be related to one another
  • A resource can have a usage quota imposed on it

Has

  • A name
  • A type
  • Labels, used for selection
  • A resource definition template
  • Resource instances
  • A resource lifecycle
  • Event records (events)
  • Certain specific actions

Related to

  • ResourceQuota
  • Namespace

Almost every important concept in Kubernetes is a resource.

Label

What it is

  • A key-value pair
  • Can be defined arbitrarily
  • Can be attached to a Resource object
  • Used to filter and select resource objects of a certain kind

Has

  • key-value

Related to

  • Resource
  • Label Selector
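
For illustration, labels live in a resource's metadata and are matched elsewhere by label selectors. A minimal sketch (all names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-labeled
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx

Such Pods can then be picked out with a selector, e.g. kubectl get pods -l app=nginx.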

Master node

What it is

  • The management node of a k8s cluster
  • Responsible for managing the cluster
  • Provides the access entry point for cluster resource data

Has

  • The etcd storage service (optional)
  • The ApiServer daemon
  • The Scheduler service process
  • The Controller Manager service process

Related to

  • The worker nodes (Node)

Node (worker node)

What it is

  • A Linux host
  • A worker (workload) node in k8s
  • Starts and manages Pod instances in k8s
  • Accepts management instructions from the Master node
  • Has a certain degree of self-healing capability

Has

  • A name and IP
  • System resource information (CPU, memory, etc.)
  • Runs the docker engine service
  • Runs the k8s daemon kubelet
  • Runs the k8s load balancer kube-proxy
  • Has a status

Related to

  • The Master node

Pod

What it is

  • A Pod is a single collective unit made up of a group of containers
  • The smallest scheduling unit in k8s
  • Can be scheduled onto any Node for recovery
  • All containers in a Pod share resources (network, volumes)

Has

  • A name and IP
  • A status
  • Labels
  • A group of container processes
  • Some volumes

Related to

  • Node
  • Service

Service

What it is

  • A microservice
  • Isolated by means of containers
  • A TCP service
  • Usually stateless
  • Multiple instances can be deployed to serve at the same time
  • An internal concept; not reachable from outside by default
  • Can be upgraded in a rolling fashion

Has

  • A unique name
  • A virtual access address (IP address + port)
  • A NodePort, the mapped port through which external systems gain access
  • A group of backend service container processes spread across different Nodes

Related to

  • Multiple Pods carrying the same Label
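
As an illustration, a Service selects its backend Pods by label and can expose a NodePort for external access. A minimal sketch (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080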

Replication Controller

What it is

  • A Pod replica controller
  • Keeps the number of running instances of a given kind of Pod at a set level
  • Belongs to the control layer of a service cluster
  • Rolling upgrades of a service are implemented through it

Has

  • A label selector, used to select the target Pods
  • A number, the desired count of target Pod instances
  • A Pod template, used to create Pods

Related to

  • Multiple Pods carrying the same Label
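
A Replication Controller manifest carries exactly those three pieces: a selector, a replica count, and a Pod template. A minimal sketch (names are placeholders):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80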

Volumes

What it is

  • A storage volume attached to a Pod
  • Can be used by multiple containers within the Pod

Has

  • A name
  • Storage space

Related to

  • Pod

A volume is a shared directory inside a Pod that can be accessed by multiple containers.

Kubernetes Volumes are similar to Docker volumes, but not identical. A Kubernetes Volume has the same lifecycle as its Pod rather than that of any individual container, so its data is not lost when a container stops or restarts. Kubernetes also supports many Volume types, and a single Pod can use several Volumes at once.

emptyDir

An emptyDir volume is created when the Pod is assigned to a Node, and starts out empty. All containers in the same Pod can read and write the same files in the emptyDir. When the Pod is removed from the Node, the data in the emptyDir is deleted permanently.

What emptyDir is used for

  • Scratch space, e.g. a temporary directory for a running program that does not need to be kept.
  • A temporary directory for checkpointing the intermediate state of a long-running task.
  • A directory that one container uses to obtain data from another container (a directory shared by multiple containers).
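
To illustrate the shared-directory case, a sketch of a Pod in which two containers mount the same emptyDir (all names and the busybox image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: shared-dir-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/data && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
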
hostPath

Mounts a file or directory from the host machine into the Pod.

hostPath is used for

  • Log files generated by a containerized application that must be kept permanently; they can be stored on the host's high-speed filesystem.
  • A containerized application that needs to reach the Docker engine's internal data structures on the host; by declaring a hostPath for the host's /var/lib/docker directory, the application inside the container can access Docker's file system directly.
  • Note, however, that Pods with identical configuration on different Nodes may get inconsistent results when accessing directories and files on such a Volume, because the underlying directories and files differ from host to host.
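
A sketch of the /var/lib/docker case above (names and the busybox image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: docker-dir-pod
spec:
  containers:
  - name: inspector
    image: busybox
    command: ["sh", "-c", "ls /host-docker && sleep 3600"]
    volumeMounts:
    - name: docker-dir
      mountPath: /host-docker
  volumes:
  - name: docker-dir
    hostPath:
      path: /var/lib/docker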

Installing Kubernetes

The demo environment consists of three Tencent Cloud instances, each with 1 core and 1 GB of RAM, running 64-bit CentOS 7.2.

Public IP        Private IP    Hostname  Role
140.143.187.188  10.181.4.225  why-01    k8s master
140.143.184.211  10.181.13.57  why-02    k8s node
140.143.182.69   10.181.5.146  why-03    k8s node

Environment preparation

The environment preparation steps need to be performed on all three hosts.

Change the hostname

[root@VM_4_225_centos ~]# hostnamectl 
   Static hostname: VM_4_225_centos
         Icon name: computer-vm
           Chassis: vm
        Machine ID: f9d400c5e1e8c3a8209e990d887d4ac1
           Boot ID: c0c434856a93496f9b92eac773f2bec7
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-514.26.2.el7.x86_64
      Architecture: x86-64
[root@VM_4_225_centos ~]# hostnamectl set-hostname why-01
[root@VM_4_225_centos ~]# hostnamectl 
   Static hostname: why-01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: f9d400c5e1e8c3a8209e990d887d4ac1
           Boot ID: c0c434856a93496f9b92eac773f2bec7
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-514.26.2.el7.x86_64
      Architecture: x86-64

Configure hostname mappings

[root@VM_4_225_centos ~]# vi /etc/hosts
127.0.0.1  localhost  localhost.localdomain  VM_4_225_centos
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.181.4.225 why-01
10.181.13.57 why-02
10.181.5.146 why-03

Test

[root@VM_4_225_centos ~]# ping why-01
PING why-01 (10.181.4.225) 56(84) bytes of data.
64 bytes from why-01 (10.181.4.225): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from why-01 (10.181.4.225): icmp_seq=2 ttl=64 time=0.023 ms
64 bytes from why-01 (10.181.4.225): icmp_seq=3 ttl=64 time=0.023 ms
^C
--- why-01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.012/0.019/0.023/0.006 ms
[root@VM_4_225_centos ~]# ping why-02
PING why-02 (10.181.13.57) 56(84) bytes of data.
64 bytes from why-02 (10.181.13.57): icmp_seq=1 ttl=64 time=0.274 ms
64 bytes from why-02 (10.181.13.57): icmp_seq=2 ttl=64 time=0.195 ms
^C
--- why-02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.195/0.234/0.274/0.042 ms
[root@VM_4_225_centos ~]# ping why-03
PING why-03 (10.181.5.146) 56(84) bytes of data.
64 bytes from why-03 (10.181.5.146): icmp_seq=1 ttl=64 time=0.296 ms
64 bytes from why-03 (10.181.5.146): icmp_seq=2 ttl=64 time=0.217 ms
^C
--- why-03 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.217/0.256/0.296/0.042 ms

Stop the firewall

[root@why-01 ~]# systemctl stop firewalld.service
[root@why-01 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Oct 18 11:21:17 why-01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Oct 18 11:21:17 why-01 systemd[1]: Started firewalld - dynamic firewall daemon.
Oct 18 11:24:48 why-01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Oct 18 11:24:53 why-01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
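
Note that systemctl stop only lasts until the next reboot; to keep firewalld from coming back at boot, the unit can also be disabled:

[root@why-01 ~]# systemctl disable firewalld.service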

Master node

Install the required packages

[root@why-01 ~]# yum install -y  kubernetes
[root@why-01 ~]# yum install -y  etcd

Edit the configuration files

Edit config

[root@why-01 ~]# vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://why-01:8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://why-01:4001"

The two key settings here are KUBE_MASTER and KUBE_ETCD_SERVERS:

KUBE_MASTER="--master=http://why-01:8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://why-01:4001"

Define the apiserver

[root@why-01 ~]# vi /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Start the master services

[root@why-01 ~]# systemctl start etcd
[root@why-01 ~]# systemctl start kube-apiserver
[root@why-01 ~]# systemctl start kube-controller-manager
[root@why-01 ~]# systemctl start kube-scheduler
[root@why-01 ~]# ps -ef | grep kube
kube      8904     1  0 15:08 ?        00:00:02 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --port=8080 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
kube      8914     1  0 15:08 ?        00:00:02 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://why-01:8080
kube      8924     1  0 15:08 ?        00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://why-01:8080
root     10098 10980  0 15:24 pts/0    00:00:00 grep --color=auto kube

Startup logs

[root@why-01 ~]# grep kube /var/log/messages
Oct 18 15:08:40 localhost kube-apiserver: Flag --port has been deprecated, see --insecure-port instead.
Oct 18 15:08:40 localhost kube-apiserver: I1018 15:08:40.769511    8904 config.go:562] Will report 10.181.4.225 as public IP address.
Oct 18 15:08:40 localhost kube-apiserver: W1018 15:08:40.770538    8904 handlers.go:50] Authentication is disabled
Oct 18 15:08:40 localhost kube-apiserver: E1018 15:08:40.823097    8904 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://0.0.0.0:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 0.0.0.0:8080: getsockopt: connection refused
Oct 18 15:08:40 localhost kube-apiserver: E1018 15:08:40.823160    8904 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://0.0.0.0:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 0.0.0.0:8080: getsockopt: connection refused
Oct 18 15:08:40 localhost kube-apiserver: E1018 15:08:40.823210    8904 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://0.0.0.0:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 0.0.0.0:8080: getsockopt: connection refused
Oct 18 15:08:40 localhost kube-apiserver: E1018 15:08:40.837268    8904 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://0.0.0.0:8080/api/v1/limitranges?resourceVersion=0: dial tcp 0.0.0.0:8080: getsockopt: connection refused
Oct 18 15:08:40 localhost kube-apiserver: [restful] 2017/10/18 15:08:40 log.go:30: [restful/swagger] listing is available at https://10.181.4.225:6443/swaggerapi/
Oct 18 15:08:40 localhost kube-apiserver: [restful] 2017/10/18 15:08:40 log.go:30: [restful/swagger] https://10.181.4.225:6443/swaggerui/ is mapped to folder /swagger-ui/
Oct 18 15:08:40 localhost kube-apiserver: E1018 15:08:40.857939    8904 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://0.0.0.0:8080/api/v1/namespaces?resourceVersion=0: dial tcp 0.0.0.0:8080: getsockopt: connection refused
Oct 18 15:08:40 localhost kube-apiserver: I1018 15:08:40.915331    8904 serve.go:95] Serving securely on 0.0.0.0:6443
Oct 18 15:08:40 localhost kube-apiserver: I1018 15:08:40.915382    8904 serve.go:109] Serving insecurely on 0.0.0.0:8080
Oct 18 15:08:41 localhost kube-apiserver: I1018 15:08:41.934150    8904 trace.go:61] Trace "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started 2017-10-18 15:08:41.13189386 +0800 CST):
Oct 18 15:08:41 localhost kube-apiserver: [16.966µs] [16.966µs] About to convert to expected version
Oct 18 15:08:41 localhost kube-apiserver: [47.84µs] [30.874µs] Conversion done
Oct 18 15:08:41 localhost kube-apiserver: [56.325µs] [8.485µs] About to store object in database
Oct 18 15:08:41 localhost kube-apiserver: [802.193014ms] [802.136689ms] Object stored in database
Oct 18 15:08:41 localhost kube-apiserver: [802.195701ms] [2.687µs] Self-link added
Oct 18 15:08:41 localhost kube-apiserver: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" [802.22197ms] [26.269µs] END
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.935447    8914 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.937099    8914 plugins.go:94] No cloud provider specified.
Oct 18 15:08:41 localhost kube-controller-manager: W1018 15:08:41.937126    8914 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Oct 18 15:08:41 localhost kube-controller-manager: W1018 15:08:41.937152    8914 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address:
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.937282    8914 nodecontroller.go:189] Sending events to api server.
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937530    8914 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.937550    8914 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937854    8914 util.go:45] Metric for replenishment_controller already registered
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937863    8914 util.go:45] Metric for replenishment_controller already registered
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937869    8914 util.go:45] Metric for replenishment_controller already registered
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937886    8914 util.go:45] Metric for replenishment_controller already registered
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.937892    8914 util.go:45] Metric for replenishment_controller already registered
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.938060    8914 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"d6cd92e0-b3d2-11e7-9884-5254006db97b", APIVersion:"v1", ResourceVersion:"760", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' why-01 became leader
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.938150    8914 replication_controller.go:219] Starting RC Manager
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.964423    8914 controllermanager.go:403] Starting extensions/v1beta1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.964441    8914 controllermanager.go:406] Starting daemon set controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.964686    8914 controllermanager.go:413] Starting job controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.964998    8914 controllermanager.go:420] Starting deployment controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965204    8914 controllermanager.go:427] Starting ReplicaSet controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965394    8914 controllermanager.go:436] Attempting to start horizontal pod autoscaler controller, full resource map map[authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} apps/v1beta1:&APIResourceList{GroupVersion:ap
Oct 18 15:08:41 localhost kube-controller-manager: ps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],}]
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965530    8914 controllermanager.go:438] Starting autoscaling/v1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965543    8914 controllermanager.go:440] Starting horizontal pod controller.
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965666    8914 controllermanager.go:458] Attempting to start disruption controller, full resource map map[apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} storage.k
Oct 18 15:08:41 localhost kube-controller-manager: 8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],}]
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965771    8914 controllermanager.go:460] Starting policy/v1beta1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.965784    8914 controllermanager.go:462] Starting disruption controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.966013    8914 controllermanager.go:470] Attempting to start statefulset, full resource map map[extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.
Oct 18 15:08:41 localhost kube-controller-manager: authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],}]
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.966114    8914 controllermanager.go:472] Starting apps/v1beta1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.966126    8914 controllermanager.go:474] Starting StatefulSet controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.966295    8914 controllermanager.go:499] Not starting batch/v2alpha1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971458    8914 daemoncontroller.go:196] Starting Daemon Sets controller manager
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971505    8914 deployment_controller.go:132] Starting deployment controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971542    8914 replica_set.go:162] Starting ReplicaSet controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971554    8914 horizontal.go:132] Starting HPA Controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971761    8914 disruption.go:317] Starting disruption controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971765    8914 disruption.go:319] Sending events to api server.
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.971809    8914 pet_set.go:146] Starting statefulset controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.980614    8914 controllermanager.go:544] Attempting to start certificates, full resource map map[authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} autoscaling/v1:&AP
Oct 18 15:08:41 localhost kube-controller-manager: IResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],}]
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.980737    8914 controllermanager.go:546] Starting certificates.k8s.io/v1alpha1 apis
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.980748    8914 controllermanager.go:548] Starting certificate request controller
Oct 18 15:08:41 localhost kube-controller-manager: E1018 15:08:41.980895    8914 controllermanager.go:558] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.981912    8914 attach_detach_controller.go:235] Starting Attach Detach Controller
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.981952    8914 serviceaccounts_controller.go:120] Starting ServiceAccount controller
Oct 18 15:08:41 localhost kube-scheduler: I1018 15:08:41.985606    8924 leaderelection.go:188] sucessfully acquired lease kube-system/kube-scheduler
Oct 18 15:08:41 localhost kube-scheduler: I1018 15:08:41.985818    8924 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"d6cf5a87-b3d2-11e7-9884-5254006db97b", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' why-01 became leader
Oct 18 15:08:41 localhost kube-apiserver: I1018 15:08:41.985007    8904 trace.go:61] Trace "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started 2017-10-18 15:08:41.178050634 +0800 CST):
Oct 18 15:08:41 localhost kube-apiserver: [10.114µs] [10.114µs] About to convert to expected version
Oct 18 15:08:41 localhost kube-apiserver: [25.02µs] [14.906µs] Conversion done
Oct 18 15:08:41 localhost kube-apiserver: [27.8µs] [2.78µs] About to store object in database
Oct 18 15:08:41 localhost kube-apiserver: [806.904651ms] [806.876851ms] Object stored in database
Oct 18 15:08:41 localhost kube-apiserver: [806.909319ms] [4.668µs] Self-link added
Oct 18 15:08:41 localhost kube-apiserver: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" [806.931342ms] [22.023µs] END
Oct 18 15:08:41 localhost kube-controller-manager: I1018 15:08:41.997509    8914 garbagecollector.go:766] Garbage Collector: Initializing
Oct 18 15:08:42 localhost kube-controller-manager: E1018 15:08:42.038622    8914 actual_state_of_world.go:475] Failed to set statusUpdateNeeded to needed true because nodeName="why-02"  does not exist
Oct 18 15:08:42 localhost kube-controller-manager: I1018 15:08:42.043757    8914 nodecontroller.go:429] Initializing eviction metric for zone:
Oct 18 15:08:42 localhost kube-controller-manager: W1018 15:08:42.043780    8914 nodecontroller.go:678] Missing timestamp for Node why-02. Assuming now as a timestamp.
Oct 18 15:08:42 localhost kube-controller-manager: I1018 15:08:42.043822    8914 nodecontroller.go:608] NodeController detected that zone  is now in state Normal.
Oct 18 15:08:42 localhost kube-controller-manager: I1018 15:08:42.043958    8914 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"why-02", UID:"1debd0dd-b3d2-11e7-9884-5254006db97b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node why-02 event: Registered Node why-02 in NodeController
Oct 18 15:08:51 localhost kube-controller-manager: I1018 15:08:51.997996    8914 garbagecollector.go:780] Garbage Collector: All monitored resources synced. Proceeding to collect garbage

Node (worker) nodes

config

[root@why-02 ~]# vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://why-01:8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://why-01:4001"

kubelet

[root@why-02 ~]# vi /etc/kubernetes/kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=why-02"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://why-01:8080"

# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the node services

[root@why-02 ~]# systemctl start kube-proxy 
[root@why-02 ~]# systemctl start kubelet
[root@why-02 ~]# systemctl start docker
[root@why-02 ~]# ps -ef | grep kube
root     12959     1  0 15:08 ?        00:00:03 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://why-01:8080
root     13148     1  0 15:08 ?        00:00:03 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://why-01:8080 --address=0.0.0.0 --port=10250 --hostname-override=why-02 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
root     24731 10849  0 15:25 pts/0    00:00:00 grep --color=auto kube

Check the logs

[root@why-02 ~]# grep kube /var/log/messages
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.519935   12959 server.go:215] Using iptables Proxier.
Oct 18 15:08:47 localhost kube-proxy: W1018 15:08:47.520918   12959 proxier.go:253] clusterCIDR not specified, unable to distinguish between internal and external traffic
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.520932   12959 server.go:227] Tearing down userspace rules.
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.529777   12959 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.530013   12959 conntrack.go:66] Setting conntrack hashsize to 32768
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.530205   12959 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Oct 18 15:08:47 localhost kube-proxy: I1018 15:08:47.530219   12959 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Oct 18 15:08:48 localhost kubelet: Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.849102   13148 feature_gate.go:181] feature gates: map[]
Oct 18 15:08:48 localhost kubelet: W1018 15:08:48.849165   13148 server.go:605] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.849401   13148 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.849411   13148 docker.go:376] Start docker client with request timeout=2m0s
Oct 18 15:08:48 localhost kubelet: E1018 15:08:48.851645   13148 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.854805   13148 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
Oct 18 15:08:48 localhost kubelet: W1018 15:08:48.865603   13148 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.873172   13148 fs.go:117] Filesystem partitions: map[/dev/vda1:{mountpoint:/ major:253 minor:1 fsType:ext3 blockSize:0}]
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.873814   13148 manager.go:198] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1040912384 MachineID:f9d400c5e1e8c3a8209e990d887d4ac1 SystemUUID:F2624563-5BE4-45D5-8AE4-AD1148FDE199 BootID:798f26bd-d7d3-4550-a6f3-1907c826aafe Filesystems:[{Device:/dev/vda1 Capacity:52709421056 Type:vfs Inodes:3276800 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:53687091200 Scheduler:none} 252:0:{Name:dm-0 Major:252 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:52:54:00:55:0a:7f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:1073332224 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.875114   13148 manager.go:204] Version: {KernelVersion:3.10.0-514.26.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.6 CadvisorVersion: CadvisorRevision:}
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.876608   13148 kubelet.go:252] Watching apiserver
Oct 18 15:08:48 localhost kubelet: W1018 15:08:48.878988   13148 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.879005   13148 kubelet.go:477] Hairpin mode set to "hairpin-veth"
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.888578   13148 docker_manager.go:262] Setting dockerRoot to /var/lib/docker
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.888588   13148 docker_manager.go:265] Setting cgroupDriver to systemd
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.889599   13148 server.go:770] Started kubelet v1.5.2
Oct 18 15:08:48 localhost kubelet: E1018 15:08:48.889988   13148 kubelet.go:1146] Image garbage collection failed: unable to find data for container /
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.890479   13148 server.go:123] Starting to listen on 0.0.0.0:10250
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.899638   13148 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.899658   13148 status_manager.go:129] Starting to sync pod status with apiserver
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.899674   13148 kubelet.go:1715] Starting kubelet main sync loop.
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.899682   13148 kubelet.go:1726] skipping pod synchronization - [container runtime is down]
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.900034   13148 volume_manager.go:244] Starting Kubelet Volume Manager
Oct 18 15:08:48 localhost kubelet: E1018 15:08:48.922986   13148 factory.go:301] devicemapper filesystem stats will not be reported: unable to find thin_ls binary
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.922997   13148 factory.go:305] Registering Docker factory
Oct 18 15:08:48 localhost kubelet: W1018 15:08:48.923004   13148 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.923009   13148 factory.go:54] Registering systemd factory
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.923106   13148 factory.go:86] Registering Raw factory
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.923218   13148 manager.go:1106] Started watching for new ooms in manager
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.925665   13148 oomparser.go:185] oomparser using systemd
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.926093   13148 manager.go:288] Starting recovery of all containers
Oct 18 15:08:48 localhost kubelet: I1018 15:08:48.939773   13148 manager.go:293] Recovery completed
Oct 18 15:08:49 localhost kubelet: I1018 15:08:49.000189   13148 kubelet_node_status.go:227] Setting node annotation to enable volume controller attach/detach
Oct 18 15:08:49 localhost kubelet: I1018 15:08:49.001170   13148 kubelet_node_status.go:74] Attempting to register node why-02
Oct 18 15:08:49 localhost kubelet: I1018 15:08:49.006669   13148 kubelet_node_status.go:113] Node why-02 was previously registered
Oct 18 15:08:49 localhost kubelet: I1018 15:08:49.006683   13148 kubelet_node_status.go:77] Successfully registered node why-02


After the services are started on why-03, the following log entries appear:

Oct 18 15:27:07 localhost kube-controller-manager: E1018 15:27:07.470995    8914 actual_state_of_world.go:475] Failed to set statusUpdateNeeded to needed true because nodeName="why-03"  does not exist
Oct 18 15:27:12 localhost kube-controller-manager: W1018 15:27:12.286379    8914 nodecontroller.go:678] Missing timestamp for Node why-03. Assuming now as a timestamp.
Oct 18 15:27:12 localhost kube-controller-manager: I1018 15:27:12.286534    8914 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"why-03", UID:"bf29d7d9-b3d5-11e7-917c-5254006db97b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node why-03 event: Registered Node why-03 in NodeController
Oct 18 15:27:37 localhost kube-controller-manager: W1018 15:27:37.035036    8914 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1032/764]) [2031]

Check the cluster status

[root@why-01 ~]# kubectl get nodes
NAME      STATUS    AGE
why-02    Ready     30m
why-03    Ready     4m
[root@why-01 ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

This shows that the nodes registered successfully and the installation is working properly.

The kubectl command

[root@why-01 ~]# kubectl
kubectl controls the Kubernetes cluster manager. 

Find more information at https://github.com/kubernetes/kubernetes.

Basic Commands (Beginner):
  create         Create a resource by filename or stdin
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  get            Display one or many resources
  explain        Documentation of resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage a deployment rollout
  rolling-update Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.

Advanced Commands:
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the given shell (bash or zsh)

Other Commands:
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  help           Help about any command
  version        Print the client and server version information

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

For resource management, the subcommands used most are get, describe, create, replace, patch and delete.
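
A few typical invocations of those subcommands (the nginx names below are just placeholders):

[root@why-01 ~]# kubectl get pods
[root@why-01 ~]# kubectl describe pod nginx
[root@why-01 ~]# kubectl create -f nginx-pod.yaml
[root@why-01 ~]# kubectl replace -f nginx-pod.yaml
[root@why-01 ~]# kubectl delete pod nginx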

The built-in help in Kubernetes is quite detailed.

[root@why-01 ~]# kubectl describe --help
Show details of a specific resource or group of resources. This command joins many API calls together to form a detailed
description of a given resource or group of resources. 

  $ kubectl describe TYPE NAME_PREFIX

will first check for an exact match on TYPE and NAME PREFIX. If no such resource exists, it will output details for
every resource that has a name prefixed with NAME PREFIX. 

Valid resource types include: 

  * buildconfigs (aka 'bc')  
  * builds  
  * clusters (valid only for federation apiservers)  
  * componentstatuses (aka 'cs')  
  * configmaps (aka 'cm')  
  * daemonsets (aka 'ds')  
  * deployments (aka 'deploy')  
  * deploymentconfigs (aka 'dc')  
  * endpoints (aka 'ep')  
  * events (aka 'ev')  
  * horizontalpodautoscalers (aka 'hpa')  
  * imagestreamimages (aka 'isimage')  
  * imagestreams (aka 'is')  
  * imagestreamtags (aka 'istag')  
  * ingresses (aka 'ing')  
  * groups  
  * jobs  
  * limitranges (aka 'limits')  
  * namespaces (aka 'ns')  
  * networkpolicies  
  * nodes (aka 'no')  
  * persistentvolumeclaims (aka 'pvc')  
  * persistentvolumes (aka 'pv')  
  * pods (aka 'po')  
  * podsecuritypolicies (aka 'psp')  
  * podtemplates  
  * policies  
  * projects  
  * replicasets (aka 'rs')  
  * replicationcontrollers (aka 'rc')  
  * resourcequotas (aka 'quota')  
  * rolebindings  
  * routes  
  * secrets  
  * serviceaccounts (aka 'sa')  
  * services (aka 'svc')  
  * statefulsets  
  * users  
  * storageclasses  
  * thirdpartyresources

Examples:
  # Describe a node
  kubectl describe nodes kubernetes-node-emt8.c.myproject.internal

  # Describe a pod
  kubectl describe pods/nginx

  # Describe a pod identified by type and name in "pod.json"
  kubectl describe -f pod.json

  # Describe all pods
  kubectl describe pods

  # Describe pods by label name=myLabel
  kubectl describe po -l name=myLabel

  # Describe all pods managed by the 'frontend' replication controller (rc-created pods
  # get the name of the rc as a prefix in the pod the name).
  kubectl describe pods frontend

Options:
      --all-namespaces=false: If present, list the requested object(s) across all namespaces. Namespace in current
context is ignored even if specified with --namespace.
  -f, --filename=[]: Filename, directory, or URL to files containing the resource to describe
      --include-extended-apis=true: If true, include definitions of new APIs via calls to the API server. [default true]
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage
related manifests organized within the same directory.
  -l, --selector='': Selector (label query) to filter on
      --show-events=true: If true, display events related to the described object.

Usage:
  kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME) [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

View a node's description

[root@why-01 ~]# kubectl describe node why-02
Name:           why-02
Role:           
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=why-02
Taints:         <none>
CreationTimestamp:  Wed, 18 Oct 2017 15:01:08 +0800
Phase:          
Conditions:
  Type          Status  LastHeartbeatTime           LastTransitionTime          Reason              Message
  ----          ------  -----------------           ------------------          ------              -------
  OutOfDisk         False   Wed, 18 Oct 2017 15:40:41 +0800     Wed, 18 Oct 2017 15:01:08 +0800     KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure    False   Wed, 18 Oct 2017 15:40:41 +0800     Wed, 18 Oct 2017 15:01:08 +0800     KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure      False   Wed, 18 Oct 2017 15:40:41 +0800     Wed, 18 Oct 2017 15:01:08 +0800     KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready         True    Wed, 18 Oct 2017 15:40:41 +0800     Wed, 18 Oct 2017 15:01:08 +0800     KubeletReady            kubelet is posting ready status
Addresses:      10.181.13.57,10.181.13.57,why-02
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   1
 memory:                1016516Ki
 pods:                  110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   1
 memory:                1016516Ki
 pods:                  110
System Info:
 Machine ID:            f9d400c5e1e8c3a8209e990d887d4ac1
 System UUID:           F2624563-5BE4-45D5-8AE4-AD1148FDE199
 Boot ID:           798f26bd-d7d3-4550-a6f3-1907c826aafe
 Kernel Version:        3.10.0-514.26.2.el7.x86_64
 OS Image:          CentOS Linux 7 (Core)
 Operating System:      linux
 Architecture:          amd64
 Container Runtime Version: docker://1.12.6
 Kubelet Version:       v1.5.2
 Kube-Proxy Version:        v1.5.2
ExternalID:         why-02
Non-terminated Pods:        (0 in total)
  Namespace         Name        CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------         ----        ------------    ----------  --------------- -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests  CPU Limits  Memory Requests Memory Limits
  ------------  ----------  --------------- -------------
  0 (0%)    0 (0%)      0 (0%)      0 (0%)
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  51m       39m     109 {kubelet why-02}            Normal      NodeHasSufficientDisk   Node why-02 status is now: NodeHasSufficientDisk
  51m       39m     109 {kubelet why-02}            Normal      NodeHasSufficientMemory Node why-02 status is now: NodeHasSufficientMemory
  51m       39m     109 {kubelet why-02}            Normal      NodeHasNoDiskPressure   Node why-02 status is now: NodeHasNoDiskPressure
  31m       31m     1   {kube-proxy why-02}         Normal      Starting        Starting kube-proxy.
  31m       31m     1   {kubelet why-02}            Normal      Starting        Starting kubelet.
  31m       31m     1   {kubelet why-02}            Warning     ImageGCFailed       unable to find data for container /
  31m       31m     1   {kubelet why-02}            Normal      NodeHasSufficientDisk   Node why-02 status is now: NodeHasSufficientDisk
  31m       31m     1   {kubelet why-02}            Normal      NodeHasSufficientMemory Node why-02 status is now: NodeHasSufficientMemory
  31m       31m     1   {kubelet why-02}            Normal      NodeHasNoDiskPressure   Node why-02 status is now: NodeHasNoDiskPressure
[root@why-01 ~]# kubectl get namespace
NAME          STATUS    AGE
default       Active    3h
kube-system   Active    3h
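
As a side note (this command is not from the original session), most kubectl commands accept a --namespace flag to scope the query to one of the namespaces listed above, for example:

[root@why-01 ~]# kubectl get pods --namespace=kube-system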

Create a Pod

Write the Pod definition file for kubectl create

[root@why-01 ~]# vi /etc/k8s_yaml/nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports: 
    - containerPort: 80

Create the Pod

[root@why-01 ~]# kubectl create -f /etc/k8s_yaml/nginx-pod.yaml
Error from server (ServerTimeout): error when creating "/etc/k8s_yaml/nginx-pod.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account

The error above is caused by the ServiceAccount admission control restriction: the apiserver refuses to create the Pod until the default service account has an API token, which has not been generated in this setup.

[root@why-01 ~]# vi /etc/kubernetes/apiserver
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL setting
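
For reference, on the CentOS kubernetes packages the admission control setting in /etc/kubernetes/apiserver typically looks like the lines below; your existing value may differ slightly, and the only change needed is dropping ServiceAccount from the list:

# before (typical default; may differ in your installation)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# after: ServiceAccount removed
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"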

Restart the apiserver

[root@why-01 ~]# systemctl restart kube-apiserver

Create the Pod again

[root@why-01 ~]# kubectl create -f /etc/k8s_yaml/nginx-pod.yaml
pod "nginx" created
[root@why-01 ~]# kubectl get pods
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          4m
[root@why-01 ~]# kubectl describe pods nginx
Name:       nginx
Namespace:  default
Node:       why-03/10.181.5.146
Start Time: Wed, 18 Oct 2017 16:10:16 +0800
Labels:     <none>
Status:     Pending
IP:     
Controllers:    <none>
Containers:
  nginx:
    Container ID:       
    Image:          nginx
    Image ID:           
    Port:           80/TCP
    State:          Waiting
      Reason:           ContainerCreating
    Ready:          False
    Restart Count:      0
    Volume Mounts:      <none>
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
No volumes.
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  6m        6m      1   {default-scheduler }            Normal      Scheduled   Successfully assigned nginx to why-03
  5m        43s     5   {kubelet why-03}            Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request.  details: (Get https://gcr.io/google_containers/pause-amd64:3.0 dial tcp 23.42.185.234:443: i/o timeout)"

  4m    15s 12  {kubelet why-03}        Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""



image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request.

From the events you can see the failure comes from pulling Google's pause image from gcr.io, which is not reachable from behind the firewall.

I used an Aliyun host in Hong Kong that can reach gcr.io to pull the image, saved it to a local tarball, copied the tarball to the k8s nodes, and loaded it there.
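
An alternative I did not use here: point the kubelet at a pause/infra image it can actually reach, via the kubelet's --pod-infra-container-image flag. On the CentOS packaging that flag is set in /etc/kubernetes/kubelet; a sketch (the registry address below is only a placeholder, not a real recommendation):

# /etc/kubernetes/kubelet on each node (example only; the image address is an assumption)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=<reachable-registry>/pause-amd64:3.0"

After changing it, restart the kubelet with systemctl restart kubelet.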

Pull the image and save it to a local tarball

[root@why 17:09:32 ~]$docker pull gcr.io/google_containers/pause-amd64:3.0
3.0: Pulling from gcr.io/google_containers/pause-amd64
bdb43c586e88: Pull complete 
f8e2eec424cf: Pull complete 
bb497e16a2d5: Pull complete 
Digest: sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516
Status: Downloaded newer image for gcr.io/google_containers/pause-amd64:3.0
[root@why 17:10:33 ~]$docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mysql                                  latest              7ec6f7dd349c        5 weeks ago         412.3 MB
web1.0                                 latest              2f58e99fcd86        9 weeks ago         584.6 MB
centos                                 latest              f3b88ddaed16        10 weeks ago        192.5 MB
gcr.io/google_containers/pause-amd64   3.0                 bb497e16a2d5        17 months ago       746.9 kB
[root@why 18:06:59 ~]$docker save -o pause-amd64.tar gcr.io/google_containers/pause-amd64:3.0
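
Copy the tarball from the Hong Kong host to the k8s nodes; I used scp (the destination host and path here are just an example):

[root@why ~]$scp pause-amd64.tar root@why-02:/root/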

Load the image on the k8s node

[root@why-02 ~]# docker load --input pause-amd64.tar

After that the Pod can be created successfully; since this is the first time the nginx image is pulled, it takes quite a while.

Because of the network, the image pull was taking very long; I assumed something was wrong with the service, so I deleted the Pod and created it again.

[root@why-01 ~]# kubectl create -f /etc/k8s_yaml/nginx-pod.yaml
pod "nginx" created

[root@why-01 ~]# kubectl describe pods nginx
Name:       nginx
Namespace:  default
Node:       why-02/10.181.13.57
Start Time: Wed, 18 Oct 2017 19:11:20 +0800
Labels:     <none>
Status:     Running
IP:     172.17.0.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://2741c4c1d52dd55e9e5d8204ba1738140bd37aa6c0cb8a11f2a56821d54c4824
    Image:          nginx
    Image ID:           docker-pullable://docker.io/nginx@sha256:004ac1d5e791e705f12a17c80d7bb1e8f7f01aa7dca7deee6e65a03465392072
    Port:           80/TCP
    State:          Running
      Started:          Wed, 18 Oct 2017 19:11:26 +0800
    Ready:          True
    Restart Count:      0
    Volume Mounts:      <none>
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
No volumes.
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath       Type        Reason          Message
  --------- --------    -----   ----            -------------       --------    ------          -------
  16s       16s     1   {default-scheduler }                Normal      Scheduled       Successfully assigned nginx to why-02
  15s       15s     1   {kubelet why-02}    spec.containers{nginx}  Normal      Pulling         pulling image "nginx"
  16s       10s     2   {kubelet why-02}                Warning     MissingClusterDNS   kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  10s       10s     1   {kubelet why-02}    spec.containers{nginx}  Normal      Pulled          Successfully pulled image "nginx"
  10s       10s     1   {kubelet why-02}    spec.containers{nginx}  Normal      Created         Created container with docker id 2741c4c1d52d; Security:[seccomp=unconfined]
  10s       10s     1   {kubelet why-02}    spec.containers{nginx}  Normal      Started         Started container with docker id 2741c4c1d52d

[root@why-01 ~]# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          44s

You can see the extra images that now exist on the node

[root@why-02 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/pause-amd64   3.0                 d8e4ede19139        17 months ago       746.9 kB
[root@why-02 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
docker.io/nginx                        latest              1e5ab59102ce        8 days ago          108.3 MB
gcr.io/google_containers/pause-amd64   3.0                 d8e4ede19139        17 months ago       746.9 kB
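
As a quick sanity check (not part of the original output), you can curl the Pod IP from the node it runs on; 172.17.0.2 is the IP shown in the describe output above, and the request should return the default nginx welcome page:

[root@why-02 ~]# curl -I http://172.17.0.2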