xiaonaiqiang
2021-02-12
05.Multi k8s-master Architecture

# 01.Multi k8s-master architecture

| Role | IP | Components |
| --- | --- | --- |
| k8s-master1 | 192.168.56.63 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-master2 | 192.168.56.64 | kube-apiserver, kube-controller-manager, kube-scheduler |
| k8s-node1 | 192.168.56.65 | kubelet, kube-proxy, docker, etcd |
| k8s-node2 | 192.168.56.66 | kubelet, kube-proxy, docker, etcd |
| Load Balancer (Master) | 192.168.56.61, ==192.168.56.60 (VIP)== | Nginx (L4) |
| Load Balancer (Backup) | 192.168.56.62 | Nginx (L4) |

# 02.Configure k8s-master2

# 2.1 Copy the /opt/ config files from k8s-master1 to k8s-master2

==The k8s-master1 and k8s-master2 configurations are almost identical, so the files can simply be copied over==

```bash
# 1. Copy the config files and certificates from k8s-master1 to master2
[root@k8s-master1 ~]# scp -r /opt/kubernetes/ root@192.168.56.64:/opt/
# 2. k8s-master2 also connects to etcd, so copy the etcd certificates over as well
[root@k8s-master1 opt]# scp -r /opt/etcd/ root@192.168.56.64:/opt/etcd
# 3. Copy the systemd .service unit files that manage the services
[root@k8s-master1 opt]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.56.64:/usr/lib/systemd/system
# 4. Copy the kubectl binary to k8s-master2
[root@k8s-master1 opt]# scp /usr/local/bin/kubectl root@192.168.56.64:/usr/local/bin/kubectl
```
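If more standby masters are ever added, the four copies above are easy to script. A minimal sketch (the host list is a hypothetical placeholder; the commands are printed rather than executed, so this reads as a dry run):

```bash
copy_plan() {
    # Print (not run) the same four scp copies as above for one extra master.
    host=$1
    echo "scp -r /opt/kubernetes/ root@${host}:/opt/"
    echo "scp -r /opt/etcd/ root@${host}:/opt/etcd"
    for unit in kube-apiserver kube-controller-manager kube-scheduler; do
        echo "scp /usr/lib/systemd/system/${unit}.service root@${host}:/usr/lib/systemd/system"
    done
    echo "scp /usr/local/bin/kubectl root@${host}:/usr/local/bin/kubectl"
}

copy_plan 192.168.56.64
```

Remove the `echo`s to execute for real; the function body mirrors section 2.1 exactly, just looped per unit file.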

# 2.2 On k8s-master2, only the addresses in kube-apiserver.conf need to change

```bash
[root@k8s-master2 opt]# vim /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.56.63:2379,https://192.168.56.65:2379,https://192.168.56.66:2379 \
--bind-address=192.168.56.64 \           # change the bind address to 192.168.56.64
--secure-port=6443 \
--advertise-address=192.168.56.64 \      # change the advertise address to 192.168.56.64
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
```
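When cloning the file, only `--bind-address` and `--advertise-address` change; everything else, including `--etcd-servers`, must stay as it is. A small sed sketch of that edit (the sample fragment below is abridged, and GNU sed's in-place `-i` is assumed):

```bash
# Write an abridged sample of the conf to a temp file so the sketch is self-contained.
conf=$(mktemp)
cat > "$conf" <<'EOF'
--bind-address=192.168.56.63 \
--advertise-address=192.168.56.63 \
--etcd-servers=https://192.168.56.63:2379 \
EOF

# Rewrite only the two address flags; the etcd-servers line is left untouched.
sed -i 's/^--bind-address=.*/--bind-address=192.168.56.64 \\/' "$conf"
sed -i 's/^--advertise-address=.*/--advertise-address=192.168.56.64 \\/' "$conf"

matches=$(grep -c '192\.168\.56\.64' "$conf")
echo "flags now pointing at master2: $matches"
rm -f "$conf"
```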

# 2.3 Start the services and enable them at boot

```bash
# 1. Start the services
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
# 2. Enable the services at boot
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
# 3. Check service status
[root@k8s-master2 opt]# tail -f /opt/kubernetes/logs/*    # watch the logs
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
```
  • Verify that the k8s-master2 services are working

```bash
[root@k8s-master2 opt]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   2d4h   v1.16.0
k8s-node2   Ready    <none>   2d3h   v1.16.0
```
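The six start/enable calls above can also be collapsed into one loop, since `systemctl enable --now` starts and enables a unit in one step. A sketch, kept as a dry run with `echo`:

```bash
units="kube-apiserver kube-controller-manager kube-scheduler"
for svc in $units; do
    # echo keeps this a dry run; drop it to actually execute
    echo systemctl enable --now "$svc"
done
```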

# 03.Deploy the Nginx load balancer

# 3.1 Install nginx

# 3.1.1 Get the nginx rpm package

  • nginx rpm packages: http://nginx.org/packages/rhel/7/x86_64/RPMS/

  • Install nginx on both 192.168.56.61 and 192.168.56.62
```bash
[root@lb-slave ~]# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
```

# 3.1.2 Edit nginx.conf

```bash
[root@lb-master ~]# vim /etc/nginx/nginx.conf   # just add the stream block below
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.56.63:6443;
        server 192.168.56.64:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
```
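Because this is a `stream` (L4) block, nginx forwards raw TCP and TLS still terminates at kube-apiserver itself, so the load balancers need no certificates. As a quick sanity sketch, the backend list can be pulled straight out of the config with awk (the snippet is written to a local temp file here so the sketch is self-contained):

```bash
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
upstream k8s-apiserver {
    server 192.168.56.63:6443;
    server 192.168.56.64:6443;
}
EOF

# Extract "host:port" from every backend line, dropping the trailing semicolon.
backends=$(awk '/^[[:space:]]*server [0-9]/ {gsub(";", "", $2); print $2}' "$snippet")
echo "$backends"
rm -f "$snippet"
```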
  • Full config for reference (everything else is just the nginx default config)
```bash
[root@lb-master ~]# cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
        server 192.168.56.63:6443;
        server 192.168.56.64:6443;
    }
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}
```

# 3.1.3 Start nginx

```bash
[root@lb-slave ~]# systemctl start nginx
[root@lb-slave ~]# systemctl enable nginx
```

# 04.Keepalived high availability

# 4.1 Install and configure keepalived

# 4.1.1 Install keepalived

```bash
[root@lb-master ~]# yum -y install keepalived
```

# 4.1.2 Configure keepalived on the master

  • ==Note: the default keepalived.conf contains far more than we need; delete it all and replace it with the config below==
```bash
[root@lb-master ~]# vim /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

# nginx health check
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"      # path of the script that checks whether nginx is alive
}

# VRRP hot-standby group
vrrp_instance VI_1 {
    state MASTER             # this keepalived node is the MASTER
    interface eth0           # NIC name; must match the real interface (often ens33 by default)
    virtual_router_id 51     # VRRP router ID; unique per instance
    priority 100             # priority; the backup is set to 90
    advert_int 1             # VRRP heartbeat interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111       # simple shared secret between master and slave
    }
    virtual_ipaddress {
        192.168.56.60/24     # the keepalived virtual IP (VIP)
    }
    track_script {
        check_nginx
    }
}
```

# 4.1.3 Configure keepalived on the slave

==Note: the default keepalived.conf contains far more than we need; delete it all and replace it with the config below==

```bash
[root@lb-slave ~]# vim /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP             # this keepalived node is the BACKUP
    interface eth0           # NIC name; must match the real interface
    virtual_router_id 51     # VRRP router ID; must be the same as on the master
    priority 90              # lower than the master's 100
    advert_int 1             # VRRP heartbeat interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.56.60/24     # same VIP as on the master
    }
    track_script {
        check_nginx
    }
}
```
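The backup file is identical to the master file except for three fields (`router_id`, `state`, `priority`); the VIP, `virtual_router_id`, and `auth_pass` must all match. A quick sketch making that difference explicit (the fragments below are illustrative, not the full configs):

```bash
cat > /tmp/kl-master.frag <<'EOF'
router_id NGINX_MASTER
state MASTER
priority 100
EOF
cat > /tmp/kl-backup.frag <<'EOF'
router_id NGINX_BACKUP
state BACKUP
priority 90
EOF

# Every line differs between the two fragments: 3 removed + 3 added.
changed=$(diff /tmp/kl-master.frag /tmp/kl-backup.frag | grep -c '^[<>]')
echo "differing lines: $changed"
rm -f /tmp/kl-master.frag /tmp/kl-backup.frag
```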

# 4.1.4 Add the nginx health-check script

```bash
[root@lb-master keepalived]# vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
[root@lb-master keepalived]# chmod +x check_nginx.sh           # make the script executable
```
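The script above (which must exist on both load balancer nodes, since the backup config references it too) only checks that an nginx process appears in the process table. An alternative sketch, not from the original post, probes the listening port instead, so a wedged nginx whose process still exists would also fail the check; port 6443 matches the stream listener from section 3.1.2:

```bash
#!/bin/bash
# Succeed only if something accepts a TCP/TLS connection on the given local port.
port_alive() {
    curl -sk --max-time 2 "https://127.0.0.1:${1}/" >/dev/null 2>&1
}

# keepalived treats a non-zero exit status as "nginx is down" and releases the VIP.
if port_alive 6443; then
    status=0    # healthy
else
    status=1    # down -> keepalived fails over
fi
echo "health-check exit status: $status"
```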

# 4.1.5 Start keepalived

```bash
[root@lb-master ~]# systemctl start keepalived
[root@lb-master ~]# systemctl enable keepalived
```

# 4.2 Test keepalived failover

# 4.2.1 Confirm the VIP is bound on lb-master

  • In a browser, 192.168.56.60, 192.168.56.61, and 192.168.56.62 should all serve the nginx default page

  • Stop nginx on lb-master and verify that the VIP fails over to the backup node

# 05.Point the nodes at the VIP

# 5.1 Verify the VIP works

==In principle, kube-apiserver should now be reachable through the VIP from any k8s-master or k8s-node==

```bash
[root@k8s-node2 logs]# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.56.60:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```
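The `Authorization: Bearer …` header uses the token from `/opt/kubernetes/cfg/token.csv` (the same file passed to `--token-auth-file` in section 2.2). Without `jq` installed, the interesting field can still be extracted from the JSON with sed; a small sketch against an abridged copy of the payload shown above:

```bash
# Sample payload, abridged from the /version response above.
payload='{"major": "1","minor": "16","gitVersion": "v1.16.0"}'

# Pull out gitVersion without jq.
version=$(printf '%s' "$payload" | sed -n 's/.*"gitVersion": *"\([^"]*\)".*/\1/p')
echo "server version: $version"
```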

# 5.2 Switch every k8s-node to the VIP

```bash
# 1. Find which config files still point at 192.168.56.63 as the apiserver
[root@k8s-node1 cfg]# cd /opt/kubernetes/cfg/
[root@k8s-node1 cfg]# grep 192 *
bootstrap.kubeconfig:     server: https://192.168.56.63:6443
kubelet.kubeconfig:      server: https://192.168.56.63:6443
kube-proxy.kubeconfig:    server: https://192.168.56.63:6443
# 2. Batch replace with sed
[root@k8s-node1 cfg]# sed -i 's#192.168.56.63#192.168.56.60#g' *
# 3. Confirm the change
[root@k8s-node1 cfg]# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.56.60:6443
kubelet.kubeconfig:    server: https://192.168.56.60:6443
kube-proxy.kubeconfig:    server: https://192.168.56.60:6443
```

Run the same three steps on every k8s-node.

# 5.3 Check that the cluster is healthy

```bash
# 1. Tail the load balancer access log on lb-master
[root@lb-master keepalived]# tail -f /var/log/nginx/k8s-access.log
# 2. Restart the components on every node; each restart reconnects to kube-apiserver, so the requests show up in k8s-access.log
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# systemctl restart kube-proxy
# 3. From either k8s-master1 or k8s-master2, the nodes report a normal status
[root@k8s-master2 opt]# kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   2d6h   v1.16.0
k8s-node2   Ready    <none>   2d5h   v1.16.0
```
Last updated: 2024/3/13 15:35:10
Theme by Vdoing | Copyright © 2019-2025 逍遥子 Tech Blog 京ICP备2021005373号