HA Kubernetes Cluster and etcd Cluster

Date: March 8, 2018


Environment

Hostname   IP              Services
node1      172.31.51.208   master, node, etcd
node2      172.31.51.209   master, node, etcd
node3      172.31.51.210   node, etcd

For the network layer this guide uses Calico, which performs well, and RBAC is enabled on the cluster; see the RBAC reference for details.

For more, see https://github.com/rootsongjc/kubernetes-handbook

Certificates

Certificate overview

Since both etcd and Kubernetes communicate entirely over TLS, the TLS certificates must be generated first. They are generated with cfssl (its general usage is not covered in detail here). Generation can be done on any node; here it is done on the host machine. The certificates are listed below:

Certificate        Config files                               Purpose
etcd-root-ca.pem   etcd-root-ca-csr.json                      etcd root CA certificate
etcd.pem           etcd-gencert.json, etcd-csr.json           etcd cluster certificate
k8s-root-ca.pem    k8s-root-ca-csr.json                       Kubernetes root CA certificate
kube-proxy.pem     k8s-gencert.json, kube-proxy-csr.json      certificate used by kube-proxy
admin.pem          k8s-gencert.json, admin-csr.json           certificate used by kubectl
kubernetes.pem     k8s-gencert.json, kubernetes-csr.json      certificate used by kube-apiserver

Installing the CFSSL tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Generating the etcd certificates

  • etcd-csr.json
{
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Beijing",
      "ST": "Beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "172.31.51.208",
    "172.31.51.209",
    "172.31.51.210"
  ]
}
  • etcd-root-ca-csr.json
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Beijing",
      "ST": "Beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd-root-ca"
}
  • etcd-gencert.json
{
  "signing": {
    "default": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
    }
  }
}

Certificate generation commands

cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca
cfssl gencert --ca etcd-root-ca.pem --ca-key etcd-root-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd

Reference output

[root@why 16:08:05 etcd_ca]#cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca
2018/02/23 16:08:11 [INFO] generating a new CA key and certificate from CSR
2018/02/23 16:08:11 [INFO] generate received request
2018/02/23 16:08:11 [INFO] received CSR
2018/02/23 16:08:11 [INFO] generating key: rsa-4096
2018/02/23 16:08:14 [INFO] encoded CSR
2018/02/23 16:08:14 [INFO] signed certificate with serial number 221368461073067543843602439859241279606478450580
[root@why 16:10:31 etcd_ca]#cfssl gencert --ca etcd-root-ca.pem --ca-key etcd-root-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd
2018/02/23 16:10:34 [INFO] generate received request
2018/02/23 16:10:34 [INFO] received CSR
2018/02/23 16:10:34 [INFO] generating key: rsa-2048
2018/02/23 16:10:34 [INFO] encoded CSR
2018/02/23 16:10:34 [INFO] signed certificate with serial number 628159429515825978103169788945772280225557715049
2018/02/23 16:10:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@why 16:10:34 etcd_ca]#ls
etcd.csr  etcd-csr.json  etcd-gencert.json  etcd-key.pem  etcd.pem  etcd-root-ca.csr  etcd-root-ca-csr.json  etcd-root-ca-key.pem  etcd-root-ca.pem
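
Before distributing the certificates it is worth checking that the etcd server certificate carries the expected SANs. A quick check with openssl (assumed to be installed on the host):

openssl x509 -in etcd.pem -noout -text | grep -A 1 "Subject Alternative Name"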

Generating the Kubernetes certificates

  • k8s-root-ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
  • k8s-gencert.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
  • kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • kubernetes-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "172.31.51.208",
        "172.31.51.209",
        "172.31.51.210",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Certificate generation commands

cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
for targetName in kubernetes admin kube-proxy; do cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName ;done

Generation output

[root@why 22:56:34 k8s_ca]#cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
2018/02/23 22:58:14 [INFO] generating a new CA key and certificate from CSR
2018/02/23 22:58:14 [INFO] generate received request
2018/02/23 22:58:14 [INFO] received CSR
2018/02/23 22:58:14 [INFO] generating key: rsa-4096
2018/02/23 22:58:23 [INFO] encoded CSR
2018/02/23 22:58:23 [INFO] signed certificate with serial number 468120826038099459107354459007221811133077222762
[root@why 22:58:23 k8s_ca]#for targetName in kubernetes admin kube-proxy; do cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName ;done
2018/02/23 22:59:00 [INFO] generate received request
2018/02/23 22:59:00 [INFO] received CSR
2018/02/23 22:59:00 [INFO] generating key: rsa-2048
2018/02/23 22:59:00 [INFO] encoded CSR
2018/02/23 22:59:00 [INFO] signed certificate with serial number 10196728749474450168959561825915194450071084855
2018/02/23 22:59:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2018/02/23 22:59:00 [INFO] generate received request
2018/02/23 22:59:00 [INFO] received CSR
2018/02/23 22:59:00 [INFO] generating key: rsa-2048
2018/02/23 22:59:00 [INFO] encoded CSR
2018/02/23 22:59:00 [INFO] signed certificate with serial number 207211650669456652001713861841218241215735992963
2018/02/23 22:59:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2018/02/23 22:59:00 [INFO] generate received request
2018/02/23 22:59:00 [INFO] received CSR
2018/02/23 22:59:00 [INFO] generating key: rsa-2048
2018/02/23 22:59:01 [INFO] encoded CSR
2018/02/23 22:59:01 [INFO] signed certificate with serial number 547171933871880415848734616889872168559307791694
2018/02/23 22:59:01 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Generating configuration files

Generating the bootstrap token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Create the kubelet bootstrapping kubeconfig (kubectl must already be installed). For node-only machines, the API server address is the local nginx listener at 127.0.0.1:6443. If a master is also to be used as a node, the API server address on that master should be masterIP:6443 instead, because there is no need for (and no way to run) an nginx listener on 127.0.0.1:6443 there: port 6443 is already taken by the local API server.

The configuration below therefore only fits node-only machines. To use a master as a node as well, regenerate the kubeconfig below with the API server address set to that master's own API server address.

Generation script

export KUBE_APISERVER="https://127.0.0.1:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Tokne: ${BOOTSTRAP_TOKEN}"

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

echo "Create kubelet bootstrapping kubeconfig..."
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

echo "Create kube-proxy kubeconfig..."
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Generate the advanced audit policy
cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

Installing the etcd cluster

Distributing the certificates

The commands below are for the local host; the remote hosts are handled the same way (a sketch for them follows the commands).

mkdir /etc/etcd/ssl
cp *.pem /etc/etcd/ssl
chown -R etcd:etcd /etc/etcd/ssl
chmod -R 644 /etc/etcd/ssl/*
chmod 755 /etc/etcd/ssl
chown -R etcd:etcd /var/lib/etcd
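
A sketch for the remote hosts (it assumes root SSH access and that the etcd RPM, which creates the etcd user, is already installed on them):

# copy the etcd certificates to the other etcd nodes and fix ownership/permissions
for host in 172.31.51.209 172.31.51.210; do
  ssh root@${host} "mkdir -p /etc/etcd/ssl"
  scp etcd.pem etcd-key.pem etcd-root-ca.pem root@${host}:/etc/etcd/ssl/
  ssh root@${host} "chown -R etcd:etcd /etc/etcd/ssl && chmod 644 /etc/etcd/ssl/* && chmod 755 /etc/etcd/ssl && chown -R etcd:etcd /var/lib/etcd"
done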

Configuring etcd

The configuration file is /etc/etcd/etcd.conf.

The settings that need to be adjusted are ETCD_NAME and the IP addresses in the URLs (note they are https). The other nodes are configured in the same way.

# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.31.51.208:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.31.51.208:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.31.51.208:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=https://172.31.51.208:2380,etcd2=https://172.31.51.209:2380,etcd3=https://172.31.51.210:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.31.51.208:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"

# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""

Starting etcd

etcd can be installed directly with yum, or the RPM can be fetched from the rpmfind site.

yum install -y etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

During the initial bootstrap, if anything goes wrong, it is recommended to stop etcd, fix the configuration file, delete the directories under /var/lib/etcd/, and then start it again.
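
A minimal reset sketch for a member that failed to join; the directory names match the etcd.conf shown above:

systemctl stop etcd
# fix /etc/etcd/etcd.conf first, then wipe the data and WAL directories
rm -rf /var/lib/etcd/etcd1.etcd /var/lib/etcd/wal
systemctl start etcd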

Checking the cluster health

[root@node1 etcd]# export ETCDCTL_API=3
[root@node1 etcd]# etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.31.51.210:2379,https://172.31.51.208:2379,https://172.31.51.209:2379 endpoint health
https://172.31.51.208:2379 is healthy: successfully committed proposal: took = 1.572735ms
https://172.31.51.209:2379 is healthy: successfully committed proposal: took = 1.90666ms
https://172.31.51.210:2379 is healthy: successfully committed proposal: took = 3.088488ms
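
Listing the members is another quick way to confirm that all three nodes have joined (same TLS flags as above):

export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.31.51.208:2379 member list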

Deploying the HA masters

Kubernetes HA is essentially HA for the API server: the other components, such as the controller-manager and scheduler, handle their own failover through leader election, while the API server simply serves requests. For the nodes to reach an HA API server, you can either use a VIP or proxy through an nginx upstream.

This guide takes the nginx approach: on each node the master address is configured as a local nginx port, and nginx automatically drops unreachable API servers from its upstream backends.

Distribute and install the RPMs

rpm -ivh kubernetes-*

Distribute the certificates

mkdir /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl/
cp *.kubeconfig token.csv audit-policy.yaml /etc/kubernetes
chown -R kube:kube /etc/kubernetes/ssl

Set up the log directory permissions

mkdir -p /var/log/kube-audit /usr/libexec/kubernetes
chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes
chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes

Editing the master configuration

/etc/kubernetes/config:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

/etc/kubernetes/apiserver:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.31.51.208 --insecure-bind-address=127.0.0.1 --bind-address=172.31.51.208"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.31.51.208:2379,https://172.31.51.209:2379,https://172.31.51.210:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=3 \
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h"
  • In 1.8 RBAC is stable and part of the v1 API, so it no longer needs to be enabled explicitly; in 1.7 you have to add --runtime-config=rbac.authorization.k8s.io/v1beta1.
  • --authorization-mode now also includes Node, because since 1.8 the system:node role is no longer automatically bound to the system:nodes group; see the CHANGELOG (last item of the before-upgrading section).
  • --audit-policy-file points at the advanced audit configuration; see the CHANGELOG (fourth item of before-upgrading) and the Advanced Audit documentation.
  • --admission-control additionally includes NodeRestriction; for the node authorizer see Using Node Authorization.
  • --enable-bootstrap-token-auth was --experimental-bootstrap-token-auth in 1.7; see the CHANGELOG (second item under Auth).

/etc/kubernetes/controller-manager:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --experimental-cluster-signing-duration=87600h0m0s \
                              --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=5m0s"

/etc/kubernetes/scheduler:

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"

Starting the master components

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
[root@node1 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   

Configuring the nodes

Creating the nginx proxy

# Create the configuration directory
mkdir -p /etc/nginx

# Write the proxy configuration
cat << EOF >> /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 172.31.51.208:6443;
        server 172.31.51.209:6443;
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF

# Update permissions
chmod +r /etc/nginx/nginx.conf

Running nginx as a systemd service

cat << EOF >> /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

Starting the proxy

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
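
A quick sanity check that the proxy actually reaches an API server. With --anonymous-auth=false the API server rejects the unauthenticated request (401 Unauthorized), which still proves the TCP path through nginx works:

docker logs nginx-proxy
curl -k https://127.0.0.1:6443/healthz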

Node authentication against the masters

Because TLS bootstrapping is used, a ClusterRoleBinding needs to be created first.

# Run this on any master
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Starting the kubelet

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Because TLS bootstrapping is used, the kubelet does not join the cluster immediately after starting; it first requests a certificate. The log shows output such as:

Feb 24 01:00:59 node3 kubelet: I0224 01:00:59.788338   11632 bootstrap.go:57] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file

Approve the certificate requests on a master:

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

Checking the nodes

[root@node1 etcd]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     <none>    2m        v1.8.0
node2     Ready     <none>    1m        v1.8.0
node3     Ready     <none>    9h        v1.8.0

Starting kube-proxy

systemctl start kube-proxy
systemctl enable kube-proxy

Using a master as a node

Change the API server address in bootstrap.kubeconfig and kube-proxy.kubeconfig to the current master's IP.
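
One way to do this is to reuse the kubectl config commands from the generation script, pointing the cluster entry at the local API server. The example below is for node1 (172.31.51.208); the file paths match the distribution step earlier:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --embed-certs=true \
  --server=https://172.31.51.208:6443 \
  --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --embed-certs=true \
  --server=https://172.31.51.208:6443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig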

calico

Configuring Calico

wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.31.51.208:2379,https://172.31.51.209:2379,https://172.31.51.210:2379\"@gi' calico.yaml
export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`
sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
# sed -i '106,199s@.*@#&@gi' calico.yaml

Modifying the kubelet configuration

The kubelet configuration must additionally include --network-plugin=cni.
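
A sketch of the relevant kubelet flags, assuming the same RPM layout as the other components (/etc/kubernetes/kubelet); only --network-plugin=cni is strictly required here, the two CNI directory flags are the usual defaults:

# /etc/kubernetes/kubelet (excerpt) -- append to the existing KUBELET_ARGS value
KUBELET_ARGS="... --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"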

Creating the Calico DaemonSet

First create the RBAC resources:

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml

Then create the Calico DaemonSet:

kubectl create -f calico.yaml

Start kubelet and kube-proxy

systemctl start kubelet
systemctl start kube-proxy

Finally, test cross-host pod communication.
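
An illustrative check (the pod name and image are only examples): run two pods, confirm they land on different nodes, then ping from one pod to the other pod's IP:

kubectl run net-test --image=busybox --replicas=2 --command -- sleep 3600
kubectl get pod -o wide -l run=net-test
# substitute a real pod name and the pod IP reported for the pod on the other node
kubectl exec -it <net-test-pod-on-node1> -- ping -c 3 <pod-ip-on-node2>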

Deploying DNS

Deployment

wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.0/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz -C /tmp
mkdir -p /etc/kubernetes/dns
cd /tmp/kubernetes/cluster/addons/dns
cp kubedns-cm.yaml /etc/kubernetes/dns/
cp kubedns-sa.yaml /etc/kubernetes/dns/
cp kubedns-svc.yaml.sed /etc/kubernetes/dns/kubedns-svc.yaml
cp kubedns-controller.yaml.sed /etc/kubernetes/dns/kubedns-controller.yaml
cd /etc/kubernetes/dns/
sed -i 's/$DNS_DOMAIN/cluster.local/gi' kubedns-controller.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kubedns-svc.yaml
kubectl create -f .

(Optional) Deploying the DNS horizontal autoscaler

kubectl create -f /tmp/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

See the autoscaling reference for details.

Testing DNS

[root@node1 k8syaml]# kubectl create -f busybox-pod.yaml
[root@node1 k8syaml]# kubectl exec -it busybox sh
/ # nslookup frontend
Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

Name:      frontend
Address 1: 10.254.62.131 frontend.default.svc.cluster.local

nginx-ingress

https://github.com/kubernetes/ingress-nginx/tree/master/deploy

[root@node1 tmp]# git clone https://github.com/kubernetes/ingress-nginx.git
[root@node1 tmp]# cd ingress-nginx/deploy
[root@node1 deploy]# rm -f without-rbac.yaml


# Edit with-rbac.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io


Note that hostNetwork: true has been added here.

Then create the resources from these YAML files. (I have not yet figured out what publish-service-patch.yaml is for.)

Create the configuration file guestbook-ingress.yaml:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook-test
spec:
  rules:
  - host: test.whysdomain.com
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
#  - host: kibana.mritd.me
#    http:
#      paths:
#      - backend:
#          serviceName: kibana-logging
#          servicePort: 5601

If the backend service is not in the default namespace, add namespace: kube-system (or the relevant namespace) under metadata.

Example

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  rules:
  - host: k8s.whysdomain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

Querying the Ingress

[root@node1 ingress-role]# kubectl get ing
NAME             HOSTS                 ADDRESS   PORTS     AGE
guestbook-test   test.whysdomain.com             80        5m
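
An illustrative request to verify the rule from outside the cluster; because the controller runs with hostNetwork, point the Host header at whichever node the controller pod landed on (node1 here):

curl -H "Host: test.whysdomain.com" http://172.31.51.208/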

nginx-ingress configuration reference

You can look at the detailed nginx configuration inside the ingress controller container:

[root@node1 ingress-role]# kubectl exec nginx-ingress-controller-5f77699b59-qkkg2 -it  -n ingress-nginx bash
root@node1:/# cat /etc/nginx/nginx.conf

Looking at the nginx configuration inside the controller shows that creating an Ingress essentially generates the corresponding configuration in nginx-ingress, which implements the layer-7 forwarding.

dashboard

For version 1.8, the method given on the official site for adding the dashboard is:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Here I access it via NodePort; access through kubectl proxy would also work.
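
One way to switch the dashboard service to NodePort (an illustrative patch, not part of the official recommended.yaml):

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard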

[root@node1 ingress-role]# kubectl logs -f kubernetes-dashboard-55b445dbd6-n8gmd -n kube-system
2018/03/04 14:30:08 Using in-cluster config to connect to apiserver
2018/03/04 14:30:08 Using service account token for csrf signing
2018/03/04 14:30:08 No request provided. Skipping authorization
2018/03/04 14:30:08 Starting overwatch
2018/03/04 14:30:08 Successful initial request to the apiserver, version: v1.8.0
2018/03/04 14:30:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/03/04 14:30:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/03/04 14:30:08 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2018/03/04 14:30:08 Initializing JWE encryption key from synchronized object
2018/03/04 14:30:08 Creating in-cluster Heapster client
2018/03/04 14:30:08 Serving securely on HTTPS port: 8443

Access must go over HTTPS; if there is another proxy in front, it must also point at the HTTPS endpoint.

  • http://k8s.whysdomain.com/#!/login

The login details have not been fully worked out yet; see the references.