diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/.keep" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/README.md" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/README.md"
new file mode 100644
index 0000000000000000000000000000000000000000..1d00407c2ba03c905d94f69d902db5461a3fd901
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/README.md"
@@ -0,0 +1,487 @@
+# Load Balancing & Reverse Proxy Lab Demo
+
+#### Environment
+
+This lab uses four NGINX Plus instances. The host OS is flexible (Docker, Ubuntu, CentOS, etc.); Ubuntu 18.04.5 was chosen here because the demo relies on a few network tools. The instances are used as follows:
+1. One NGINX Plus instance acts as the load balancer and reverse proxy
+2. Three NGINX Plus instances act as backend web servers
+
+The following NGINX Plus-specific LB features are demonstrated on the load-balancing N+ instance:
+
+ **Chapter 1**
+
+1. LB method - least_time
+2. LB method - random with the "two" parameter and least_time
+
+ **Chapter 2**
+
+1. Cookie Persistence - Sticky cookie
+2. Cookie Persistence - Sticky learn
+3. Cookie Persistence - Sticky route
+
+ **Chapter 3**
+
+1. API demonstration
+
+
+#### Tools
+
+1. tc, installed on the web servers, to add artificial network latency
+2. curl as the client-side test tool
+3. Chrome DevTools (Network panel)
+
+
+
+#### Preparing the backend web server pages
+
+On each of the three web servers, edit the NGINX Plus page content with vim /usr/share/nginx/html/index.html so that web1, web2, and web3 are distinguishable.
+
+
+```
+Web1:
+Nginx Plus 1
+Web2:
+Nginx Plus 2
+Web3:
+Nginx Plus 3
+
+Run nginx -s reload on each instance to apply the change
+```
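+
+A quick way to apply this, sketched for web1 (assuming the stock docroot path used throughout this lab; change the text on web2 and web3 accordingly):
+```
+# overwrite the default page so each backend is identifiable
+echo 'Nginx Plus 1' | sudo tee /usr/share/nginx/html/index.html
+sudo nginx -s reload
+```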
+
+#### Chapter 1 least_time
+
+1. On the N+ instance used as the LB and reverse proxy, edit /etc/nginx/nginx.conf (vi /etc/nginx/nginx.conf) and add the backend servers:
+```
+upstream backend {
+        least_time last_byte;
+        zone backend 64k;
+        server 192.168.5.30;
+        server 192.168.5.32;
+        server 192.168.5.33;
+}
+# the zone directive allocates shared memory so the upstream state is visible in the dashboard and API.
+```
+
+2. Edit /etc/nginx/conf.d/least_time.conf (vi /etc/nginx/conf.d/least_time.conf) and paste the following configuration to load-balance across the backend group. To make the effect of least_time last_byte easy to observe, the NGINX API and dashboard are enabled as well.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. Reload the configuration with nginx -s reload
+
+4. The least_time last_byte method makes its load decision from each server's full response time (header + body). This demo adds network latency on two of the web servers and checks whether most requests are sent to the low-latency server.
+```
+# run the following on any two of the web servers; ens32 is the network interface, change it to match your own:
+tc qdisc add dev ens32 root netem delay 500ms
+```
+5. Access http://<LB instance> repeatedly and verify the distribution across the backend upstream at http://<LB instance>/dashboard.html. As the figure below shows, most requests land on the low-latency web server.
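+
+To drive enough traffic for the dashboard counters to be meaningful, a simple loop works (192.168.5.31 is the LB address used later in this lab; substitute your own):
+```
+# fire 50 requests at the LB; each response body names the backend that served it
+for i in $(seq 1 50); do curl -s http://192.168.5.31/; done
+```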
+
+
+
+#### Chapter 1 Random
+
+The random method already exists in the OSS version, where it picks an available upstream server at random. NGINX Plus adds the two parameter, which can be combined with least_time=header or least_time=last_byte: two servers are drawn at random from the upstream group, compared by response time as in least_time, and the better of the two receives the request.
+
+
+1. On the N+ instance used as the LB and reverse proxy, edit /etc/nginx/nginx.conf and change the load-balancing method in the backend upstream as follows:
+```
+upstream backend {
+        random two least_time=last_byte;
+        zone backend 64k;
+        server 192.168.5.30;
+        server 192.168.5.32;
+        server 192.168.5.33;
+}
+```
+
+2. Edit /etc/nginx/conf.d/random.conf (vi /etc/nginx/conf.d/random.conf) and paste the following configuration to load-balance across the backend group; the NGINX API and dashboard are enabled as well.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. Reload the configuration with nginx -s reload
+
+4. The effect of random can be observed by adding latency to two of the servers; here the two web servers are delayed by 500ms and 300ms.
+```
+# the method randomly picks 2 servers, compares their full response (header+body) time, and selects the lower-latency one
+# run the following on two of the web servers; ens32 is the network interface, change it to match your own:
+web2:
+tc qdisc add dev ens32 root netem delay 500ms
+web3:
+tc qdisc add dev ens32 root netem delay 300ms
+```
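+
+When the test is finished, the simulated latency can be removed on each web server:
+```
+# delete the netem qdisc added above (same interface name as before)
+tc qdisc del dev ens32 root netem
+```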
+
+5. Access http://<LB instance> repeatedly and verify the distribution at http://<LB instance>/dashboard.html. As the figure below shows, most requests go to the lowest-latency web server, some reach the 300ms server, and only a few reach the 500ms server.
+
+
+
+#### Chapter 2 Sticky cookie
+NGINX Plus extends session persistence with application-layer, cookie-based methods. This chapter walks through the three cookie persistence modes. In the first, sticky cookie, NGINX Plus inserts a cookie name and value into the HTTP response, then checks whether later requests carry that cookie to pin the session to a server.
+
+1. On the N+ instance used as the LB and reverse proxy, edit /etc/nginx/nginx.conf, add the sticky cookie directive to the upstream block, and save.
+```
+upstream backend {
+ zone backend 64k;
+ server 192.168.5.30;
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky cookie test expires=1h path=/;
+}
+# the sticky cookie directive inserts a cookie named test with a 1-hour expiry; it is sent with requests for every path under /.
+```
+For the full sticky cookie syntax see https://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky_cookie
+
+2. Edit /etc/nginx/conf.d/cookie.conf (vi /etc/nginx/conf.d/cookie.conf) and paste the following configuration to load-balance across the backend group; the NGINX API and dashboard are enabled as well.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. nginx -s reload
+
+4. In the Chrome DevTools (F12) Network panel, access http://<LB instance> several times: the first HTTP response contains the test cookie, and subsequent requests carrying it keep landing on the same server.
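+
+The same check works with curl (assuming the LB at 192.168.5.31; the cookie value is whatever the first response returns):
+```
+# first request: note the Set-Cookie: test=... header in the response
+curl -i http://192.168.5.31/
+# replay the cookie; the same backend page should come back every time
+curl -b "test=<value-from-above>" http://192.168.5.31/
+```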
+
+
+#### Chapter 2 Sticky learn
+
+With sticky learn, NGINX Plus learns the designated cookie from the server's HTTP response; if a later request carries an already-learned cookie, NGINX Plus forwards it to the matching server. The feature uses shared memory; 1MB holds roughly 4,000 sessions.
+
+1. On the N+ instance used as the LB and reverse proxy, edit /etc/nginx/nginx.conf and add the sticky learn directive to the upstream block as follows:
+```
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h;
+ }
+```
+
+For the full sticky learn syntax see https://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky_learn
+
+2. Edit /etc/nginx/conf.d/learn.conf (vi /etc/nginx/conf.d/learn.conf) and paste the following configuration to load-balance across the backend group; the NGINX API and dashboard are enabled as well.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. Update the configuration on the two NGINX Plus web servers.
+```
+# web1 server: vi /etc/nginx/conf.d/web1.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a";
+ }
+}
+
+
+
+
+# web2 server: vi /etc/nginx/conf.d/web2.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b";
+ }
+}
+```
+
+4. Apply all of the changes above with nginx -s reload.
+
+5. In the Chrome DevTools (F12) Network panel, access http://<LB instance> several times: the first HTTP response contains the jsessionid cookie, and subsequent requests carrying it keep landing on the same server.
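+
+This can also be verified with curl (assuming the LB at 192.168.5.31; the jsessionid values are the ones set by web1/web2 above):
+```
+# the first request returns Set-Cookie: jsessionid=... from whichever backend answered
+curl -i http://192.168.5.31/
+# replaying web1's learned cookie should always return the web1 page
+curl -b "jsessionid=1111aaaabbbb2222.a" http://192.168.5.31/
+```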
+
+
+#### Chapter 2 Sticky route
+
+Sticky route builds on the two previous cookie methods with finer-grained, value-based control. This chapter uses map blocks to extract the characters after the last "." of the cookie value: a pins the session to web1, b pins it to web2.
+
+1. On the N+ instance used as the LB and reverse proxy, edit /etc/nginx/nginx.conf and add the sticky route directives to the upstream block as follows:
+
+
+```
+map $cookie_jsessionid $route_cookie { ~.+\.(?P<route>\w+)$ $route; }
+map $request_uri $route_uri { ~jsessionid=.+\.(?P<route>\w+)$ $route; }
+upstream backend {
+ server 192.168.5.32 route=a;
+ server 192.168.5.33 route=b;
+ sticky route $route_cookie $route_uri;
+ }
+
+# the variables come from map lookups, and the logic above applies two checks:
+#
+# first: if the response cookie contains jsessionid, the regex captures the text after the last "." of the cookie value into $route.
+#
+# second: if the request URI contains jsessionid, the regex captures the text after the last "." into $route.
+#
+# finally sticky route checks the value: route=a is sent to 192.168.5.32, route=b to 192.168.5.33.
+```
+
+2. Edit /etc/nginx/conf.d/route.conf (vi /etc/nginx/conf.d/route.conf) and paste the following configuration to load-balance across the backend group; the NGINX API and dashboard are enabled as well.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. Update the configuration on the two NGINX Plus web servers so their HTTP responses carry a jsessionid for NGINX Plus to inspect.
+```
+# web1 server: vi /etc/nginx/conf.d/web1.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a";
+ }
+}
+
+
+
+
+# web2 server: vi /etc/nginx/conf.d/web2.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b";
+ }
+}
+```
+
+4. Apply all of the changes above with nginx -s reload.
+
+5. In the Chrome DevTools (F12) Network panel, access http://<LB instance> several times: the first HTTP response contains the jsessionid cookie, and subsequent requests carrying it keep landing on the same server.
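+
+Both routing paths can be exercised directly with curl (assuming the LB at 192.168.5.31):
+```
+# route taken from the cookie: suffix .a pins the request to 192.168.5.32 (web1)
+curl -b "jsessionid=anything.a" http://192.168.5.31/
+# route taken from the request URI: suffix .b pins the request to 192.168.5.33 (web2)
+curl "http://192.168.5.31/?jsessionid=anything.b"
+```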
+
+
+#### Chapter 3 API demo
+
+The NGINX Plus API is a qualitative leap over the OSS version. Its rich endpoints can load configuration dynamically, with no config reload or process restart, enabling the following:
+
+Auto-scaling: hot-add more servers to an upstream through the API, anytime and anywhere;
+
+Maintainability: temporarily delete servers, or mark them backup or down;
+
+Tunability: modify server attributes on the fly, such as weight, connection limits, slow start, and failure timeouts;
+
+Observability: a single command surfaces a server's full status information;
+
+1. Controlling upstream configuration through the NGINX Plus API has two prerequisites. The first is a shared-memory zone in the upstream, configured as follows (the second, api write=on, appears in step 2):
+
+```
+upstream backend {
+ zone backend 64k;
+ server 192.168.5.32;
+ server 192.168.5.33;
+ }
+```
+
+2. Edit /etc/nginx/conf.d/API.conf (vi /etc/nginx/conf.d/API.conf) and paste the following configuration, which enables the NGINX API and dashboard.
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+3. Use curl to dump the current upstream information.
+```
+curl -X GET "http://n+实例IP/api/6/http/upstreams/" -H "accept: application/json"
+{"backend":{"peers":[{"id":0,"server":"192.168.5.32:80","name":"192.168.5.32","backup":false,"weight":1,"state":"up","active":0,"requests":3,"header_time":19,"response_time":21,"responses":{"1xx":0,"2xx":2,"3xx":0,"4xx":1,"5xx":0,"total":3},"sent":1281,"received":282942,"fails":0,"unavail":0,"health_checks":{"checks":0,"fails":0,"unhealthy":0},"downtime":0,"selected":"2021-10-04T02:30:21Z"},{"id":1,"server":"192.168.5.33:80","name":"192.168.5.33","backup":false,"weight":1,"state":"up","active":0,"requests":3,"header_time":40,"response_time":44,"responses":{"1xx":0,"2xx":3,"3xx":0,"4xx":0,"5xx":0,"total":3},"sent":1399,"received":988120,"fails":0,"unavail":0,"health_checks":{"checks":0,"fails":0,"unhealthy":0},"downtime":0,"selected":"2021-10-04T02:30:21Z"}],"keepalive":0,"zombies":0,"zone"
+```
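+
+The raw JSON is dense; jq (also used in the cluster lab) trims it to the interesting fields:
+```
+# per-peer summary: address, state, and request count
+curl -s "http://192.168.5.31/api/6/http/upstreams/" | jq '.backend.peers[] | {server, state, requests}'
+```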
+4. Use the NGINX Plus API's hot configuration reload to add, delete, and modify upstream members; the following POST adds a member.
+```
+# add an upstream member; note that the backup attribute cannot be hot-modified after creation, so set it correctly at creation time.
+curl -X POST "http://192.168.5.31/api/6/http/upstreams/backend/servers/" -H "accept: application/json" -H "Content-Type: application/json" -d "{
+\"server\": \"192.168.5.30:80\",
+\"weight\": 1, \"max_conns\": 0,
+\"max_fails\": 0,
+\"fail_timeout\": \"10s\",
+\"slow_start\": \"10s\",
+\"route\": \"\",
+\"backup\": false,
+\"down\": false
+}"
+
+#output
+{
+ "id": 2,
+ "server": "192.168.5.30:80",
+ "weight": 1,
+ "max_conns": 0,
+ "max_fails": 0,
+ "fail_timeout": "10s",
+ "slow_start": "10s",
+ "route": "",
+ "backup": false,
+ "down": false
+}
+
+# update member attributes: PATCH the server with id 2, changing weight to 5 and both backup and down to true; the URL is http://192.168.5.31/api/6/http/upstreams/backend/servers/2
+curl -X PATCH "http://192.168.5.31/api/6/http/upstreams/backend/servers/2" -H "accept: application/json" -H "Content-Type: application/json" -d "{
+\"server\": \"192.168.5.30:80\",
+\"weight\": 5,
+\"max_conns\": 0,
+\"max_fails\": 0,
+\"fail_timeout\": \"10s\",
+\"slow_start\": \"10s\",
+\"route\": \"\",
+\"backup\": true,
+\"down\": true
+}"
+
+#output
+{
+  "id": 2,
+  "server": "192.168.5.30:80",
+ "weight": 5,
+ "max_conns": 0,
+ "max_fails": 0,
+ "fail_timeout": "10s",
+ "slow_start": "10s",
+ "route": "",
+ "backup": false,
+ "down": true
+}
+# to reiterate: backup cannot be modified online, so even though the PATCH set backup to true, the echoed backup keeps its initial value of false.
+
+# delete a member: DELETE removes the member with id 2.
+curl -X DELETE "http://192.168.5.31/api/6/http/upstreams/backend/servers/2" -H "accept: application/json"
+
+#output
+[
+ {
+ "id": 0,
+ "server": "192.168.5.32:80",
+ "weight": 1,
+ "max_conns": 0,
+ "max_fails": 1,
+ "fail_timeout": "10s",
+ "slow_start": "0s",
+ "route": "",
+ "backup": false,
+ "down": false
+ },
+ {
+ "id": 1,
+ "server": "192.168.5.33:80",
+ "weight": 1,
+ "max_conns": 0,
+ "max_fails": 1,
+ "fail_timeout": "10s",
+ "slow_start": "0s",
+ "route": "",
+ "backup": false,
+ "down": false
+ }
+]
+```
+
+For full details of the NGINX Plus API see http://_NGINX-host_/swagger-ui/
\ No newline at end of file
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/.keep" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/cookie.conf" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/cookie.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..8c47f242d52cc6a577812a2b81afc70c23cba6e1
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/cookie.conf"
@@ -0,0 +1,15 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/learn.conf" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/learn.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..8c47f242d52cc6a577812a2b81afc70c23cba6e1
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/learn.conf"
@@ -0,0 +1,15 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/least_time.conf" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/least_time.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..8c47f242d52cc6a577812a2b81afc70c23cba6e1
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/least_time.conf"
@@ -0,0 +1,15 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/random.conf" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/random.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..8c47f242d52cc6a577812a2b81afc70c23cba6e1
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/random.conf"
@@ -0,0 +1,15 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/route.conf" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/route.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..8c47f242d52cc6a577812a2b81afc70c23cba6e1
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/conf.d/route.conf"
@@ -0,0 +1,15 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/.keep" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.cookie" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.cookie"
new file mode 100644
index 0000000000000000000000000000000000000000..3a56d539cf881da3357cf82d1f35c9cdcd347d93
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.cookie"
@@ -0,0 +1,57 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+ zone backend 64k;
+ server 192.168.5.30;
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky cookie test expires=1h path=/;
+}
+}
\ No newline at end of file
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.learn" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.learn"
new file mode 100644
index 0000000000000000000000000000000000000000..8d3eec3c868af0a65272ee424fd0140fcc809439
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.learn"
@@ -0,0 +1,58 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h;
+ }
+}
\ No newline at end of file
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.least" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.least"
new file mode 100644
index 0000000000000000000000000000000000000000..76b4f8b7ba26e43792b827162182edba2acffc61
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.least"
@@ -0,0 +1,58 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+ least_time last_byte;
+ zone backend 64k;
+ server 192.168.5.30;
+ server 192.168.5.32;
+ server 192.168.5.33;
+}
+}
+
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.random" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.random"
new file mode 100644
index 0000000000000000000000000000000000000000..967950beb2947a26034aedcc4ba21e5db17db0e5
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.random"
@@ -0,0 +1,57 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+    random two least_time=last_byte;
+    zone backend 64k;
+    server 192.168.5.30;
+    server 192.168.5.32;
+    server 192.168.5.33;
+}
+}
+
+
diff --git "a/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.route" "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.route"
new file mode 100644
index 0000000000000000000000000000000000000000..3eae199882381c7b44844dcf869c5d763c89f307
--- /dev/null
+++ "b/3 NGINX\350\264\237\350\275\275\345\235\207\350\241\241\344\270\216\345\217\215\345\220\221\344\273\243\347\220\206/nginx/nginx.conf.route"
@@ -0,0 +1,59 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+
+map $cookie_jsessionid $route_cookie { ~.+\.(?P<route>\w+)$ $route; }
+map $request_uri $route_uri { ~jsessionid=.+\.(?P<route>\w+)$ $route; }
+upstream backend {
+ server 192.168.5.32 route=a;
+ server 192.168.5.33 route=b;
+ sticky route $route_cookie $route_uri;
+ }
+
+}
+
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/.keep" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md"
new file mode 100644
index 0000000000000000000000000000000000000000..5aecfb92e5764d853e5999bfbb596eaf92c57410
--- /dev/null
+++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md"
@@ -0,0 +1,512 @@
+# NGINX Plus Cluster Management
+
+#### Lab overview
+
+This lab uses three NGINX Plus instances. The host OS is flexible (Docker, Ubuntu, CentOS, etc.); my environment uses Ubuntu 18.04.5. This chapter builds an NGINX Plus cluster with keepalived and nginx-sync.sh, covering cluster setup, configuration sync, and state sync. The lab includes:
+1. Building the active/passive cluster
+2. Passwordless SSH setup
+3. Cluster configuration sync
+4. Sticky learn state sync
+
+#### Tools
+
+1. keepalived
+```
+keepalived can be installed as follows, depending on the OS:
+yum install -y nginx-ha-keepalived
+apt-get install nginx-ha-keepalived
+
+After installation, a keepalived directory is added under /etc/; the keepalived daemon manages the NGINX Plus cluster through the keepalived.conf configuration file in that directory.
+
+Taking Ubuntu as the example, we will next use nginx-ha-setup to quickly generate an Active/Passive NGINX Plus cluster.
+Note: keepalived must be installed, and the nginx-ha-setup script run, separately on every NGINX Plus instance
+```
+2. nginx-sync.sh
+```
+nginx-sync.sh can be installed as follows, depending on the OS:
+sudo yum install nginx-sync
+sudo apt-get install nginx-sync
+
+NGINX Plus configuration sync uses the nginx-sync.sh script to push the relevant NGINX Plus configuration files from the Primary node to the other nodes; /etc/nginx-sync.conf on the Master node specifies what to sync and the nodes to sync to.
+```
+
+#### Chapter 1 Building the active/passive cluster
+
+1. Once keepalived is installed on every instance in the cluster, the nginx-ha-setup script builds the HA cluster through a guided wizard
+
+```
+# run nginx-ha-setup on node A to make it the Master
+root@vms31:/etc/keepalived# nginx-ha-setup
+Thank you for using NGINX Plus!
+
+This script is intended for use with RHEL/CentOS/SLES/Debian/Ubuntu-based systems.
+It will configure highly available NGINX Plus environment in Active/Passive pair.
+
+NOTE: you will need the following in order to continue:
+ - 2 running systems (nodes) with static IP addresses
+ - one free IP address to use as Cluster IP endpoint
+
+It is strongly recommended to run this script simultaneously on both nodes,
+e.g. use two terminal windows and switch between them step by step.
+
+It is recommended to run this script under screen(1) in order to allow
+installation process to continue in case of unexpected session disconnect.
+
+Press <Enter> to continue...
+
+Step 1: configuring internal management IP addresses.
+
+In order to communicate with each other, both nodes must have at least one IP address.
+
+The guessed primary IP of this node is: 192.168.5.31/24
+
+# at this step enter this host's IP: 192.168.5.31/24
+
+Do you want to use this address for internal cluster communication? (y/n)
+IP address of this host is set to: 192.168.5.31/24
+Primary network interface: ens32
+
+Now please enter IP address of a second node: 192.168.5.32
+You entered: 192.168.5.32
+Is it correct? (y/n)
+IP address of the second node is set to: 192.168.5.32
+
+# at this step enter the peer's IP: 192.168.5.32
+
+Press <Enter> to continue...
+
+Step 2: creating keepalived configuration
+
+Now you have to choose cluster IP address.
+This address will be used as an entry point to all your cluster resources.
+The chosen address must not be one already associated with a physical node.
+
+Enter cluster IP address: 192.168.5.100
+You entered: 192.168.5.100
+Is it correct? (y/n)
+
+# at this step enter the cluster IP: 192.168.5.100; this IP fails over together with the Master role.
+
+You must choose which node should have the MASTER role in this cluster.
+
+Please choose what the current node role is:
+1) MASTER
+2) BACKUP
+
+(on the second node you should choose the opposite variant)
+
+Press 1 or 2.
+This is the MASTER node.
+
+# at this step choose the node's initial role: 1 for Master, 2 for Backup; choose 1 here.
+
+Step 3: starting keepalived
+
+keepalived is already running.
+
+Press <Enter> to continue...
+
+Step 4: configuring cluster
+
+Enabling keepalived and nginx at boot time...
+Initial configuration complete!
+
+keepalived logs are written to syslog and located here:
+/var/log/syslog
+
+Further configuration may be required according to your needs
+and environment.
+Main configuration file for keepalived can be found at:
+ /etc/keepalived/keepalived.conf
+
+To control keepalived, use 'service keepalived' command:
+ service keepalived status
+
+keepalived documentation can be found at:
+http://www.keepalived.org/
+
+NGINX-HA-keepalived documentation can be found at:
+/usr/share/doc/nginx-ha-keepalived/README
+
+Thank you for using NGINX Plus!
+
+```
+
+Once the wizard finishes, the script creates keepalived.conf under /etc/keepalived/. Next, run nginx-ha-setup on the other instance to set it up as the Backup.
+
+```
+# run nginx-ha-setup on node B to make it the Backup
+root@vms32:/etc/keepalived# nginx-ha-setup
+Thank you for using NGINX Plus!
+
+This script is intended for use with RHEL/CentOS/SLES/Debian/Ubuntu-based systems.
+It will configure highly available NGINX Plus environment in Active/Passive pair.
+
+NOTE: you will need the following in order to continue:
+ - 2 running systems (nodes) with static IP addresses
+ - one free IP address to use as Cluster IP endpoint
+
+It is strongly recommended to run this script simultaneously on both nodes,
+e.g. use two terminal windows and switch between them step by step.
+
+It is recommended to run this script under screen(1) in order to allow
+installation process to continue in case of unexpected session disconnect.
+
+Press <Enter> to continue...
+
+Step 1: configuring internal management IP addresses.
+
+In order to communicate with each other, both nodes must have at least one IP address.
+
+The guessed primary IP of this node is: 192.168.5.32/24
+
+# at this step enter this host's IP: 192.168.5.32/24
+
+Do you want to use this address for internal cluster communication? (y/n)
+IP address of this host is set to: 192.168.5.32/24
+Primary network interface: ens32
+
+Now please enter IP address of a second node: 192.168.5.31
+You entered: 192.168.5.31
+Is it correct? (y/n)
+IP address of the second node is set to: 192.168.5.31
+
+# at this step enter the peer's IP: 192.168.5.31
+
+Press <Enter> to continue...
+
+Step 2: creating keepalived configuration
+
+Now you have to choose cluster IP address.
+This address will be used as an entry point to all your cluster resources.
+The chosen address must not be one already associated with a physical node.
+
+Enter cluster IP address: 192.168.5.100
+You entered: 192.168.5.100
+Is it correct? (y/n)
+
+# at this step enter the cluster IP: 192.168.5.100
+
+You must choose which node should have the MASTER role in this cluster.
+
+Please choose what the current node role is:
+1) MASTER
+2) BACKUP
+
+(on the second node you should choose the opposite variant)
+
+Press 1 or 2.
+This is the BACKUP node.
+
+# at this step choose the node's initial role: choose 2 for Backup here.
+
+Step 3: starting keepalived
+
+keepalived is already running.
+
+Press <Enter> to continue...
+
+Step 4: configuring cluster
+
+Enabling keepalived and nginx at boot time...
+Initial configuration complete!
+
+keepalived logs are written to syslog and located here:
+/var/log/syslog
+
+Further configuration may be required according to your needs
+and environment.
+Main configuration file for keepalived can be found at:
+ /etc/keepalived/keepalived.conf
+
+To control keepalived, use 'service keepalived' command:
+ service keepalived status
+
+keepalived documentation can be found at:
+http://www.keepalived.org/
+
+NGINX-HA-keepalived documentation can be found at:
+/usr/share/doc/nginx-ha-keepalived/README
+
+Thank you for using NGINX Plus!
+```
+
+Once the wizard finishes, the script creates keepalived.conf under /etc/keepalived/, and the cluster's active/passive roles are in place.
+
+2. The cluster state can be checked in any of the following ways:
+
+```
+ip addr show
+cat /var/run/nginx-ha-keepalived.state
+service keepalived status
+/var/log/messages --- CentOS, RHEL, and SLES‑based
+/var/log/syslog --- Ubuntu and Debian‑based
+service keepalived dump
+
+```
+
+Here the instance state is checked with cat /var/run/nginx-ha-keepalived.state
+```
+# master node
+root@vms31:/etc/keepalived# cat /var/run/nginx-ha-keepalived.state
+STATE=MASTER
+
+# backup node
+root@vms32:/etc/keepalived# cat /var/run/nginx-ha-keepalived.state
+STATE=BACKUP
+```
+
+
+#### Chapter 2 Passwordless SSH
+
+nginx-sync needs to SSH to the other nodes to run commands such as configuration verification and nginx reload. So after installing nginx-sync, set up passwordless SSH in advance so that the Master can log in to all peer nodes without a password.
+
+1. First, generate an OpenSSH key pair on the Master node:
+
+```
+root@vms31:/etc/keepalived# sudo ssh-keygen -t rsa -b 2048
+Generating public/private rsa key pair.
+Enter file in which to save the key (/root/.ssh/id_rsa):
+/root/.ssh/id_rsa already exists.
+Overwrite (y/n)? y
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /root/.ssh/id_rsa.
+Your public key has been saved in /root/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:1WyGXUDlm5XFC5rQEeqF5z3oGKpLMsDpW0vxFfEYu9s root@vms31.rhce.cc
+The key's randomart image is:
++---[RSA 2048]----+
+| o .+=oo..|
+| *.o* + +|
+| + ++oO o.o|
+|. . +.+=o = |
+| + . oSo o oo |
+|. . o . + + . |
+| . = o o E . |
+| + = . |
+| . . o. |
++----[SHA256]-----+
+
+
+root@vms31:/# cat /root/.ssh/id_rsa.pub
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDoSOaRM8+oCbssUzskLs02XtXw6ncpQ2hSip7Vg3Vbo0lfmk7sG3a5C9s0YXzGX7H2IUpNWrSWrKOrRva1kYt503dXJeE8sfrUKF95Ydh4a867tke1NtlumOcdtWfPQmb9im39bpR/pNteRLGlr7Izo5Cx7cy3bLvj+hheXhhD5NOib8FhiJyUmzqqx6ikOPSgxtzCcdN7eWrYpvFA2waP+1i9KYyXjl67IohqAwZ4XCX8kQ9oSnHaS1sNpEHxebehRoMeutmENCycVk8Dvqhw1HZnzo0FKNDwqWmAEkMfLxj7GBah3jmSe3rWpAeYFj6pIk+mK2rPZKz9KUati/WH root@vms31.rhce.cc
+```
+
+2. Second, create /root/.ssh on the Backup node
+
+```
+root@vms33:~#sudo mkdir /root/.ssh
+root@vms33:~#sudo echo 'from="192.168.5.31" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDoSOaRM8+oCbssUzskLs02XtXw6ncpQ2hSip7Vg3Vbo0lfmk7sG3a5C9s0YXzGX7H2IUpNWrSWrKOrRva1kYt503dXJeE8sfrUKF95Ydh4a867tke1NtlumOcdtWfPQmb9im39bpR/pNteRLGlr7Izo5Cx7cy3bLvj+hheXhhD5NOib8FhiJyUmzqqx6ikOPSgxtzCcdN7eWrYpvFA2waP+1i9KYyXjl67IohqAwZ4XCX8kQ9oSnHaS1sNpEHxebehRoMeutmENCycVk8Dvqhw1HZnzo0FKNDwqWmAEkMfLxj7GBah3jmSe3rWpAeYFj6pIk+mK2rPZKz9KUati/WH root@vms31.rhce.cc' >> /root/.ssh/authorized_keys
+
+# the SSH public key generated on the Master is appended (via echo) to /root/.ssh/authorized_keys, with the allowed source restricted to 192.168.5.31
+```
+
+3. Third, on the Backup node add PermitRootLogin without-password to /etc/ssh/sshd_config
+
+```
+root@vms33:~/.ssh# vi /etc/ssh/sshd_config
+# $OpenBSD: sshd_config,v 1.101 2017/03/14 07:19:07 djm Exp $
+
+# This is the sshd server system-wide configuration file. See
+# sshd_config(5) for more information.
+
+# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
+
+# The strategy used for options in the default sshd_config shipped with
+# OpenSSH is to specify options with their default value where
+# possible, but leave them commented. Uncommented options override the
+# default value.
+
+#Port 22
+#AddressFamily any
+#ListenAddress 0.0.0.0
+#ListenAddress ::
+
+#HostKey /etc/ssh/ssh_host_rsa_key
+#HostKey /etc/ssh/ssh_host_ecdsa_key
+#HostKey /etc/ssh/ssh_host_ed25519_key
+
+# Ciphers and keying
+#RekeyLimit default none
+
+# Logging
+#SyslogFacility AUTH
+#LogLevel INFO
+
+# Authentication:
+
+#LoginGraceTime 2m
+#PermitRootLogin prohibit-password
+PermitRootLogin yes
+PermitRootLogin without-password
+
+# add PermitRootLogin without-password to enable passwordless root login.
+
+#StrictModes yes
+#MaxAuthTries 6
+#MaxSessions 10
+
+#PubkeyAuthentication yes
+
+# Expect .ssh/authorized_keys2 to be disregarded by default in future.
+#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
+
+#AuthorizedPrincipalsFile none
+
+#AuthorizedKeysCommand none
+```
+
+4. Finally, run sudo service ssh reload on the Backup node
+
+5. Verify that the Master node can SSH to the Backup node without a password.
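+
+A quick check from the Master (192.168.5.33 is the peer used in this lab's nginx-sync.conf):
+```
+# should print the remote nginx version with no password prompt
+ssh root@192.168.5.33 'nginx -v'
+```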
+
+
+#### Chapter 3 Cluster configuration sync
+
+With the passwordless SSH from Chapter 2 in place, the remaining step is to create the sync configuration on the Master node: create nginx-sync.conf under /etc/.
+```
+root@vms31:/# vi /etc/nginx-sync.conf
+NODES="192.168.5.33"
+CONFPATHS="/etc/nginx/nginx.conf /etc/nginx/conf.d"
+EXCLUDE="default.conf"
+```
+Common parameters:
+
+NODES: target nodes for configuration sync, separated by spaces or newlines.
+
+CONFPATHS: files or directories to sync, separated by spaces or newlines.
+
+EXCLUDE: file names excluded from sync, separated by spaces or newlines.
+
+For the full settings see https://docs.nginx.com/nginx/admin-guide/high-availability/configuration-sharing/
+
+Finally, run nginx-sync.sh on the Master node and check whether the configuration synced successfully.
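+
+Running it is a single command on the Master; broadly, the script validates the local configuration, backs up the remote configuration, pushes the files listed in CONFPATHS, and reloads NGINX on each peer:
+```
+root@vms31:/# nginx-sync.sh
+```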
+
+
+#### Chapter 4 Sticky learn state sync
+
+NGINX Plus instances in a cluster can share their runtime state, specifically:
+
+Sticky learn session-persistence state
+
+Request-limiting state
+
+Key-value storage
+
+Every NGINX Plus instance can share state with the other cluster members; sharing is keyed on the zone name in shared memory.
+
+This chapter takes sticky learn state sync as the example and demonstrates session-persistence sync.
+
+1. Prepare two web servers and update the configuration on both NGINX Plus web servers.
+```
+# web1 server: vi /etc/nginx/conf.d/web1.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a";
+ }
+}
+
+
+
+
+# web2 server: vi /etc/nginx/conf.d/web2.conf
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b";
+ }
+}
+```
+
+2. Edit the /etc/nginx/nginx.conf file on the two NGINX Plus instances that will sync state
+```
+First instance
+# the Master node listens on port 9000 to receive sync traffic, and this instance's sync peer is 192.168.5.33; add the following stream block before the http block:
+stream {
+ server {
+ listen 9000;
+ zone_sync;
+ zone_sync_server 192.168.5.33:9000;
+ }
+}
+
+# use the following upstream in the http block; it matches the earlier sticky learn configuration, the only difference being the trailing sync parameter.
+
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h
+ sync;
+ }
+
+
+
+
+Second instance
+# the other instance likewise listens on port 9000 to receive sync traffic, and its sync peer is 192.168.5.31; add the following stream block before the http block:
+stream {
+ server {
+ listen 9000;
+ zone_sync;
+ zone_sync_server 192.168.5.31:9000;
+ }
+}
+
+# likewise use the following upstream in the http block; again the only difference is the trailing sync parameter.
+
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h
+ sync;
+ }
+```
+
+3. To further verify the sync result, the API and dashboard are needed, so edit /etc/nginx/conf.d/share.conf
+```
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://backend;
+ }
+ location = /dashboard.html {
+ root /usr/share/nginx/html;
+ }
+
+ location /api {
+ api write=on;
+ }
+}
+```
+
+4. Access http://<instance IP> in a browser to generate one sticky learn record, then use the curl command below on both instances to check that each has the record.
+```
+curl -s '127.0.0.1/api/6/stream/zone_sync' | jq
+```
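+
+The zones object is the quickest check: records_total for the client_session zone should match on both nodes (field names as exposed by the zone_sync API):
+```
+# show per-zone sync counters only
+curl -s '127.0.0.1/api/6/stream/zone_sync' | jq '.zones'
+```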
+
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/.keep" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..64e8c7d20d6a95b3716032a65d3a3caef4581f76
--- /dev/null
+++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf"
@@ -0,0 +1,14 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a";
+ }
+}
+
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf"
new file mode 100644
index 0000000000000000000000000000000000000000..ac5fa92c5275edb3ac6afe2a5eedb3cf33491201
--- /dev/null
+++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf"
@@ -0,0 +1,14 @@
+server {
+ listen 80 default_server;
+ server_name localhost;
+
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b";
+ }
+}
+
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/.keep" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/.keep"
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup"
new file mode 100644
index 0000000000000000000000000000000000000000..d018762152a8a79893215ef207c23b40c58a2475
--- /dev/null
+++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup"
@@ -0,0 +1,68 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ server {
+ listen 9000;
+ zone_sync;
+ zone_sync_server 192.168.5.31:9000;
+ }
+}
+
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h
+ sync;
+ }
+}
diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master"
new file mode 100644
index 0000000000000000000000000000000000000000..5f7307d2b6cf74b41ef93716487fb1fd4bf823a5
--- /dev/null
+++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master"
@@ -0,0 +1,68 @@
+
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ server {
+ listen 9000;
+ zone_sync;
+ zone_sync_server 192.168.5.33:9000;
+ }
+}
+
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ # '$status $body_bytes_sent "$http_referer" '
+ # '$hostname and $host'
+ # '$http_x_forwarded_for'
+ # '"$http_user_agent" "$http_x_forwarded_for"';
+    log_format  main  '"$time_local" client=$remote_addr '
+                      'method=$request_method request="$request" '
+                      'request_length=$request_length '
+                      'status=$status bytes_sent=$bytes_sent '
+                      'body_bytes_sent=$body_bytes_sent '
+                      'referer=$http_referer '
+                      'user_agent="$http_user_agent" '
+                      'host=$host '
+                      'xff=$http_x_forwarded_for '
+ 'upstream_addr=$upstream_addr '
+ 'upstream_status=$upstream_status '
+ 'request_time=$request_time '
+ 'upstream_response_time=$upstream_response_time '
+ 'upstream_connect_time=$upstream_connect_time '
+ 'upstream_header_time=$upstream_header_time';
+
+ access_log /var/log/nginx/access.log main;
+
+ sendfile on;
+ #tcp_nopush on;
+
+ keepalive_timeout 65;
+
+ #gzip on;
+
+ include /etc/nginx/conf.d/*.conf;
+
+upstream backend {
+ server 192.168.5.32;
+ server 192.168.5.33;
+ sticky learn
+ create=$upstream_cookie_jsessionid
+ lookup=$cookie_jsessionid
+ zone=client_session:1m
+ timeout=1h
+ sync;
+ }
+}