Q: Controllers run the cluster's workloads, but how does an application get exposed? A: It must be exposed through a Service before it can be accessed.

A Service is the interface that a group of Pods providing the same service exposes to consumers. Through a Service, applications gain service discovery and load balancing. A Service only provides layer-4 load balancing; layer-7 features require an Ingress.
Service types

- ClusterIP (default): a virtual IP automatically assigned by Kubernetes; reachable only from inside the cluster.
- NodePort: exposes the Service on a port of every node; any NodeIP:nodePort is routed to the ClusterIP.
- LoadBalancer: builds on NodePort and has a cloud provider create an external load balancer that forwards to NodeIP:NodePort; only usable on cloud platforms.
- ExternalName: forwards the Service to a given domain via a DNS CNAME record, set with spec.externalName.
Example

[root@k8s-master ~]# kubectl create deployment mini --image myapp:v1 --replicas 2
# Generate the controller manifest
[root@k8s-master ~]# kubectl create deployment mini --image myapp:v1 --replicas 2 --dry-run=client -o yaml > mini.yaml
# Generate the Service YAML and append it to the existing manifest
[root@k8s-master ~]# kubectl expose deployment mini --port 80 --target-port 80 --dry-run=client -o yaml >> mini.yaml
[root@k8s-master ~]# kubectl delete deployments.apps mini
[root@k8s-master ~]# vim mini.yaml
[root@k8s-master ~]# kubectl apply -f mini.yaml
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   9d
mini         ClusterIP   10.104.255.78   <none>        80/TCP    42s

Services are scheduled with iptables by default.

# The rules can be seen in the firewall (usually near the bottom)
[root@k8s-master ~]# iptables -t nat -nL
...
KUBE-MARK-MASQ 6 -- !10.244.0.0/16 10.104.255.78 /* default/mini cluster IP */ tcp dpt:80
...

IPVS mode

A Service is implemented by the kube-proxy component together with iptables. When kube-proxy realizes Services through iptables, it has to program a large number of iptables rules on the host; with many Pods, constantly refreshing those rules consumes a lot of CPU. IPVS-mode Services let a Kubernetes cluster support far more Pods.

IPVS configuration
# Install ipvsadm on every node
dnf install ipvsadm -y
# Edit the proxy configuration on the master node
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
...
    metricsBindAddress: ""
    mode: "ipvs"        # switch kube-proxy to IPVS mode
    nftables:
...
# Changing the ConfigMap does not affect Pods that are already running; the kube-proxy Pods must be restarted
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
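The one-liner above pipes the names of Pods matching kube-proxy into system() to delete them. A dry-run sketch of the same awk selection against canned output (the Pod names here are made up) shows what gets matched:

```shell
# Canned stand-in for "kubectl -n kube-system get pods" (hypothetical Pod names)
pods='coredns-7db6d8ff4d-abcde   1/1   Running   0   9d
kube-proxy-4x2mz                 1/1   Running   0   9d
kube-proxy-9k7qp                 1/1   Running   0   9d
etcd-k8s-master                  1/1   Running   0   9d'

# Same pattern as the one-liner, but print each delete command instead of executing it
echo "$pods" | awk '/kube-proxy/{print "kubectl -n kube-system delete pods " $1}'
# -> kubectl -n kube-system delete pods kube-proxy-4x2mz
# -> kubectl -n kube-system delete pods kube-proxy-9k7qp
```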
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mini-d5496d8f4-75khx 1/1 Running 0 15m 10.244.2.47 k8s-node2.org
mini-d5496d8f4-792mb 1/1 Running 0 15m 10.244.1.70 k8s-node1.org
[root@k8s-master ~]# ipvsadm -Ln
...
TCP  10.104.255.78:80 rr
  -> 10.244.1.70:80               Masq    1      0          0
  -> 10.244.2.47:80               Masq    1      0          0
...

After switching to IPVS mode, kube-proxy adds a dummy interface kube-ipvs0 on the host and assigns all Service IPs to it.

[root@k8s-master ~]# ip a | tail
...
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

Service types in depth
ClusterIP
ClusterIP mode is reachable only from inside the cluster, and provides health checking and automatic discovery for the cluster's Pods.

ClusterIP example
[root@k8s-master ~]# vim mini.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mini
  name: mini
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mini
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mini
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mini
  name: mini
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: mini
  type: ClusterIP
# Once the Service exists, the cluster DNS provides resolution for it
[root@k8s-master ~]# dnf install bind-utils -y
[root@k8s-master ~]# dig mini.default.svc.cluster.local @10.96.0.10
...
;; ANSWER SECTION:
mini.default.svc.cluster.local. 30 IN A 10.104.255.78

;; Query time: 3 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Oct 09 21:07:57 CST 2024
;; MSG SIZE  rcvd: 117

The other form of ClusterIP: headless

Headless service

A headless Service is not assigned a cluster IP, kube-proxy does not handle it, and the platform does no load balancing or routing for it. Cluster access goes through DNS, which resolves the Service name directly to the Pod IPs, so all scheduling is done by DNS alone.

Headless example
[root@k8s-master ~]# vim mini.yaml
...
  selector:
    app: mini
  type: ClusterIP
  clusterIP: None
[root@k8s-master ~]# kubectl delete -f mini.yaml
[root@k8s-master ~]# kubectl apply -f mini.yaml
# Test
[root@k8s-master ~]# kubectl get service mini
NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
mini   ClusterIP   None         <none>        80/TCP    18s
[root@k8s-master ~]# dig mini.default.svc.cluster.local @10.96.0.10
# mini.default.svc.cluster.local. is resolved by the cluster DNS
...
;; ANSWER SECTION:
mini.default.svc.cluster.local. 30 IN A 10.244.2.48
# the records resolve directly to the Pod IPs
mini.default.svc.cluster.local. 30 IN A 10.244.1.71
...
[root@k8s-master ~]# kubectl run ovo --image busyboxplus -it
/ # nslookup mini
/ # nslookup mini.default.svc.cluster.local.
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      mini.default.svc.cluster.local.
Address 1: 10.244.1.71 10-244-1-71.mini.default.svc.cluster.local
Address 2: 10.244.2.48 10-244-2-48.mini.default.svc.cluster.local
/ # curl mini
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl mini/hostname.html
mini-d5496d8f4-228g5
[root@k8s-master ~]# kubectl describe service mini
...
Endpoints: 10.244.1.71:80,10.244.2.48:80
...

NodePort

NodePort exposes a port through IPVS so that external hosts can reach the Pod workloads via a node's external IP plus that port. Access path: NodePort -> ClusterIP -> Pods.
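A NodePort Service can also pin a fixed port through the nodePort field instead of receiving a random one. A minimal sketch (30080 is only an illustration; the value must lie inside the service-node-port-range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mini
spec:
  type: NodePort
  selector:
    app: mini
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080   # fixed node port; must fall inside the configured range
```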
NodePort example

[root@k8s-master ~]# vim mini.yaml
...
  selector:
    app: mini
  type: NodePort
[root@k8s-master ~]# kubectl delete -f mini.yaml
[root@k8s-master ~]# kubectl apply -f mini.yaml
[root@k8s-master ~]# kubectl get services mini
NAME   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
mini   NodePort   10.96.170.18   <none>        80:30835/TCP   3m22s
# NodePort binds a port on every cluster node; one port maps to one service
[root@k8s-master ~]# kubectl describe service mini
...
NodePort:                 <unset>  30835/TCP
...
[root@k8s-master ~]# for i in {1..5}; do curl 172.25.254.200:30835/hostname.html; done
mini-d5496d8f4-cts8s
mini-d5496d8f4-9v24v
mini-d5496d8f4-cts8s
mini-d5496d8f4-9v24v
mini-d5496d8f4-cts8s

The default NodePort range is 30000-32767; going beyond it is rejected. Using a port outside that range requires extra configuration:

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# add under "- command:":
    - --service-node-port-range=30000-40000

After adding the --service-node-port-range flag (the range is customizable), the api-server restarts automatically; wait until it is back up before operating on the cluster. The restart completes on its own, with no manual intervention needed after the change.

LoadBalancer
On a cloud platform the provider allocates a VIP and implements the access; on bare-metal hosts MetalLB is needed to allocate the IP. Path: LoadBalancer -> NodePort -> ClusterIP -> Pods.

LoadBalancer example

[root@k8s-master ~]# vim mini.yaml
...
  selector:
    app: mini
  type: LoadBalancer
[root@k8s-master ~]# kubectl delete -f mini.yaml
[root@k8s-master ~]# kubectl apply -f mini.yaml
# By default no external IP can be allocated
[root@k8s-master ~]# kubectl get svc mini
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
mini   LoadBalancer   10.102.105.5   <pending>     80:31759/TCP   18s

LoadBalancer mode is meant for cloud platforms; bare-metal environments need MetalLB installed to provide support.

MetalLB docs: https://metallb.universe.tf/installation/ . MetalLB's job is to allocate VIPs for LoadBalancer Services.

MetalLB configuration
# Set IPVS mode with strict ARP
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
...
    metricsBindAddress: ""
    mode: "ipvs"
    ipvs:
      strictARP: true
...
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
# Download the deployment manifest
[root@k8s-master ~]# dnf install wget -y
[root@k8s-master ~]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
# Edit the image references in the manifest (a default registry for Docker pulls is already configured)
...
image: metallb/controller:v0.14.8
...
image: metallb/speaker:v0.14.8
...
# Push the images to the harbor registry
[root@k8s-master ~]# docker pull quay.io/metallb/controller:v0.14.8
[root@k8s-master ~]# docker pull quay.io/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 ooovooo.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 ooovooo.org/metallb/controller:v0.14.8
[root@k8s-master ~]# docker push ooovooo.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push ooovooo.org/metallb/controller:v0.14.8
# Deploy
[root@k8s-master ~]# kubectl apply -f metallb-native.yaml
[root@k8s-master ~]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-c9lrv 1/1 Running 0 23s
speaker-5g4hz 1/1 Running 0 23s
speaker-bw4qh 1/1 Running 0 23s
speaker-t7d7f 1/1 Running 0 23s
# Configure the address pool
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: name              # address-pool name
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.25-172.25.254.50   # pool range
---
# separate different kinds with ---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - name                  # pool to use
[root@k8s-master ~]# kubectl apply -f configmap.yml
[root@k8s-master ~]# kubectl get services mini
# an IP is allocated automatically
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
mini   LoadBalancer   10.102.105.5   172.25.254.25   80:31759/TCP   62m
# access the service from outside the cluster through the allocated address
[root@k8s-master ~]# curl 172.25.254.25
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

ExternalName

With ExternalName the Service is not allocated an IP; instead DNS resolves the name to a fixed CNAME domain, which absorbs IP changes. It is generally used when external services need to talk with Pods, or when an external workload is migrated into the cluster: ExternalName covers the transition phase, since IPs may change during the migration but DNS resolution handles that cleanly.
ExternalName example

[root@k8s-master ~]# vim mini.yaml
...
  selector:
    app: mini
  type: ExternalName
  externalName: www.mini.org
[root@k8s-master ~]# kubectl delete -f mini.yaml
[root@k8s-master ~]# kubectl apply -f mini.yaml
[root@k8s-master ~]# kubectl get services mini
NAME   TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)   AGE
mini   ExternalName   <none>       www.mini.org   80/TCP    5s

Ingress-Nginx

Docs: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters

Ingress-Nginx features

A global load-balancing service set up to proxy different backend Services, with layer-7 support. Ingress consists of two parts: the Ingress controller and the Ingress resource. The Ingress controller provides the corresponding proxy capability according to the Ingress objects you define.
Deploying Ingress

[root@k8s-master ~]# mkdir ingress
[root@k8s-master ~]# cd ingress/
# Download the deployment manifest
[root@k8s-master ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
# The ingress-nginx images must be downloaded as well
# Push the images to harbor
[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/controller:v1.11.2 ooovooo.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/kube-webhook-certgen:v1.4.3 ooovooo.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# docker push ooovooo.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push ooovooo.org/ingress-nginx/kube-webhook-certgen:v1.4.3
# Install Ingress
[root@k8s-master ~]# vim deploy.yaml
...
image: ingress-nginx/controller:v1.11.2
...
image: ingress-nginx/kube-webhook-certgen:v1.4.3
...
image: ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ingress]# kubectl apply -f deploy.yaml
[root@k8s-master ingress]# kubectl -n ingress-nginx get pods
# the first apply may leave one Pod in error; delete it and apply again
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-w6jz9 0/1 Completed 0 14s
ingress-nginx-admission-patch-bbsn6 0/1 Completed 1 14s
ingress-nginx-controller-bb7d8f97c-nx96n 1/1 Running 0 14s
# ingress-nginx-controller showing 1/1 Running means it is up
[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.33.214   <none>        80:32416/TCP,443:30320/TCP   30s
ingress-nginx-controller-admission   ClusterIP   10.98.75.102    <none>        443/TCP                      30s
# change the controller's Service type to LoadBalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
...
  type: LoadBalancer
# check the result
[root@k8s-master ~]# kubectl -n ingress-nginx get services
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
# MetalLB must already be configured
ingress-nginx-controller             LoadBalancer   10.100.33.214   172.25.254.25   80:32416/TCP,443:30320/TCP
ingress-nginx-controller-admission   ClusterIP      10.98.75.102    <none>          443/TCP

The EXTERNAL-IP in kubectl -n ingress-nginx get services is the IP that Ingress finally exposes to the outside.

Testing Ingress
[root@k8s-master ingress]# kubectl create deployment myappv1 --image myapp:v1 --dry-run=client -o yaml > myappv1.yml
[root@k8s-master ingress]# kubectl apply -f myappv1.yml
[root@k8s-master ingress]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml >> myappv1.yml
[root@k8s-master ingress]# vim myappv1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  strategy: {}
  template:
    metadata:
      labels:
        app: myappv1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myappv1
[root@k8s-master ingress]# kubectl apply -f myappv1.yml
[root@k8s-master ingress]# kubectl create ingress webcluster --rule '*/ooovooo-svc:80' --dry-run=client -o yaml > ingress.yml
[root@k8s-master ingress]# vim ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myappv1
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myappv1   # must match your Service name
            port:
              number: 80
        path: /
        pathType: Prefix
        # Exact: exact match
        # ImplementationSpecific: left to the controller
        # Prefix: prefix match
        # regular-expression matching is also possible
# Create the Ingress resource
[root@k8s-master ingress]# kubectl apply -f ingress.yml
# access it via the IP from kubectl -n ingress-nginx get services
[root@k8s-master ingress]# curl 172.25.254.25
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

The Ingress must be in the same namespace as the Service it points at.

Advanced Ingress usage
Path-based routing

[root@k8s-master ingress]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /v2
        pathType: Prefix
[root@k8s-master ingress]# kubectl apply -f ingress1.yml
[root@k8s-master ingress]# echo 172.25.254.25 www.ooovooo.org >> /etc/hosts
[root@k8s-master ingress]# curl www.ooovooo.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl www.ooovooo.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl www.ooovooo.org/v2/haha
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl www.ooovooo.org/v1/gaga
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Host-based routing
[root@k8s-master ingress]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress2
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myappv2.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ingress]# kubectl apply -f ingress2.yml
[root@k8s-master ingress]# kubectl delete -f ingress1.yml
[root@k8s-master ingress]# kubectl describe ingress ingress2
...
  Host                 Path  Backends
  ----                 ----  --------
  myappv1.ooovooo.org  /     myappv1:80 (10.244.1.89:80)
  myappv2.ooovooo.org  /     myappv2:80 (10.244.1.90:80)
[root@k8s-master ingress]# curl myappv1.ooovooo.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl myappv2.ooovooo.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

TLS encryption
[root@k8s-master ingress]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj /CN=nginxsvc/O=nginxsvc -out tls.crt
[root@k8s-master ingress]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
[root@k8s-master ingress]# vim ingress3.yml
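The contents of ingress3.yml are not listed in the original. A plausible sketch, assuming it reuses the web-tls-secret created above and the myapp-tls.ooovooo.org host and myappv1 Service from this section (the actual manifest may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress3
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp-tls.ooovooo.org
    secretName: web-tls-secret   # the TLS secret created with kubectl create secret tls
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
```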
[root@k8s-master ingress]# echo 172.25.254.25 myapp-tls.ooovooo.org >> /etc/hosts
[root@k8s-master ingress]# kubectl apply -f ingress3.yml
[root@k8s-master ingress]# kubectl delete -f ingress2.yml
# add the same hosts entry on a Windows client and test the HTTPS access

AUTH authentication
[root@k8s-master ingress]# vim ingress4.yml
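ingress4.yml is likewise not listed. Basic auth in ingress-nginx is driven by an htpasswd secret plus auth annotations; a sketch assuming a secret named auth-web holding the user ovo (the secret name and realm text are hypothetical; the kubectl describe output in this section confirms the TLS and backend parts):

```yaml
# Assumed prerequisites (not shown in the original):
#   htpasswd -cb auth ovo aaa
#   kubectl create secret generic auth-web --from-file auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress4
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp-tls.ooovooo.org
    secretName: web-tls-secret
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
```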
[root@k8s-master ingress]# kubectl delete -f ingress3.yml
[root@k8s-master ingress]# kubectl apply -f ingress4.yml
[root@k8s-master ingress]# kubectl describe ingress ingress4
...
TLS:
  web-tls-secret terminates myapp-tls.ooovooo.org
...
  myapp-tls.ooovooo.org  /  myappv1:80 (10.244.1.89:80)
...
[root@k8s-master ingress]# curl -k https://myapp-tls.ooovooo.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@k8s-master ingress]# curl -k https://myapp-tls.ooovooo.org -u ovo:aaa
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# a Windows client likewise has to log in

Rewrite redirection

# redirect requests for the given file to hostname.html
[root@k8s-master ingress]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html
  name: ingress5
spec:
  ingressClassName: nginx
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /          # a request for / is redirected to /hostname.html
        pathType: Prefix
[root@k8s-master ingress]# kubectl delete -f ingress4.yml
[root@k8s-master ingress]# kubectl apply -f ingress5.yml
[root@k8s-master ingress]# curl -Lk https://myapp-tls.ooovooo.org -u ovo:aaa
myappv1-586444467f-w4dxn
[root@k8s-master ingress]# curl -Lk https://myapp-tls.ooovooo.org/haha/hostname.html -u ovo:aaa
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
# Problem: when several paths need redirecting, each requires its own rule, which is tedious
# A regex solves the path problem
[root@k8s-master ingress]# vim ingress6.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress6
spec:
  ingressClassName: nginx
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /haha(/|$)(.*)
        pathType: ImplementationSpecific
[root@k8s-master ingress]# kubectl delete -f ingress5.yml
[root@k8s-master ingress]# kubectl apply -f ingress6.yml
[root@k8s-master ingress]# curl -Lk https://myapp-tls.ooovooo.org/haha/hostname.html -u ovo:aaa
myappv1-586444467f-w4dxn

Canary release

A canary release (also called a gray release) is a software release strategy.

Its main goal is to test and validate a new version on a small share of users or servers before rolling it out to the whole production environment, reducing the blast radius if the new version has serious problems. It is also a way of releasing Pods.

A canary release adds new Pods first and removes old ones afterwards, so the total number of Pods never falls below the desired count. After part of the Pods have been updated, the rollout pauses; only once the new Pods are confirmed healthy are the remaining Pods updated.
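The add-first, remove-later behavior described above corresponds to a Deployment's rolling-update parameters, and the pause step maps to kubectl rollout pause. A sketch with illustrative values (not taken from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # create at most one extra Pod first
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v2
        name: myapp
```

After part of the Pods have rolled, kubectl rollout pause deployment myapp stops the rollout; kubectl rollout resume deployment myapp continues it once the new version checks out.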
Release methods

Header, Cookie, and Weight; Header- and Weight-based releases are the most common.

Header-based (HTTP header) gray release

Create a gray Ingress through annotations, configuring the gray header key and value; once the gray traffic is validated, switch the production Ingress to the new version. Previously, upgrades relied on the controller's rolling updates (25% by default); using a header makes the upgrade smoother, since the key/value lets you test whether the new stack has problems before committing.
# Create the Ingress for version v1
[root@k8s-master ingress]# kubectl delete -f ingress6.yml
[root@k8s-master ingress]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ingress]# kubectl apply -f ingress7.yml
# Create the header-based canary Ingress
[root@k8s-master ingress]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: version
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ingress]# kubectl apply -f ingress8.yml
# Test
[root@k8s-master ingress]# curl myapp-tls.ooovooo.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl -H "version: 2" myapp-tls.ooovooo.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Weight-based gray release

Create a gray Ingress through annotations, configuring the gray weight and the total weight; once the gray traffic is validated, switch the production Ingress to the new version.
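With canary-weight 10 out of canary-weight-total 100, about 10% of requests should land on the canary. The controller picks per request at random; the arithmetic can be sketched deterministically by sending every 10th request to v2:

```shell
v1=0
v2=0
for i in $(seq 1 100); do
  # deterministic stand-in for the controller's 10%-weighted random pick
  if [ $((i % 10)) -eq 0 ]; then
    v2=$((v2 + 1))
  else
    v1=$((v1 + 1))
  fi
done
echo "v1:$v1, v2:$v2"   # prints v1:90, v2:10
```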
[root@k8s-master ingress]# vim ingress9.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"         # gray weight
    nginx.ingress.kubernetes.io/canary-weight-total: "100"  # total weight
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp-tls.ooovooo.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ingress]# kubectl delete -f ingress8.yml
[root@k8s-master ingress]# kubectl apply -f ingress9.yml
[root@k8s-master ingress]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++ ))
do
    response=`curl -s myapp-tls.ooovooo.org | grep -c v1`
    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"
[root@k8s-master ingress]# chmod +x check_ingress.sh
[root@k8s-master ingress]# sh check_ingress.sh
v1:89, v2:11
# the exact split varies with the configured gray weight