Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<proxy balancer://lbcluster1>
BalancerMember http://172.16.100.68:8080 loadfactor=10 route=TomcatA
BalancerMember http://172.16.100.69:8080 loadfactor=10 route=TomcatB
ProxySet stickysession=ROUTEID
</proxy>
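For the balancer above (and the Set-Cookie header) to work, the corresponding httpd modules must be loaded. A sketch of the required LoadModule lines, assuming the stock module layout (paths may differ by distribution):

```apache
# Needed by the Header directive above
LoadModule headers_module modules/mod_headers.so
# Core proxy, HTTP backend support, and the balancer itself
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
# httpd 2.4 only: the default byrequests scheduler and its shared-memory provider
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
```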
Testing confirms that session binding works.
Using an AJP connection: only two lines need to change.
#Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
The line above is commented out because, with the AJP protocol, the single directive ProxySet stickysession=ROUTEID is enough to bind sessions.
<proxy balancer://lbcluster1>
BalancerMember ajp://172.16.100.68:8009 loadfactor=10 route=TomcatA
BalancerMember ajp://172.16.100.69:8009 loadfactor=10 route=TomcatB
ProxySet stickysession=ROUTEID
</proxy>
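The ajp:// members above are handled by mod_proxy_ajp, which must also be loaded (a sketch; the module path may differ by distribution):

```apache
# Required for the ajp:// BalancerMember URLs
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
```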
Option 3: use mod_jk as the reverse proxy. With mod_jk, the connection to the backend can only use the AJP protocol.
tar xf tomcat-connectors-1.2.40-src.tar.gz
cd tomcat-connectors-1.2.40-src/native
Prepare the build environment:
yum install httpd-devel gcc glibc-devel
yum groupinstall "Development tools"
mod_jk depends on apxs:
[root@node3 rpm]# which apxs
/usr/sbin/apxs
In the native directory, build and install the module:
./configure --with-apxs=/usr/sbin/apxs
make
make install
Load the mod_jk module in httpd.conf:
LoadModule jk_module modules/mod_jk.so
Check that the module loaded successfully:
[root@node3 conf]# httpd -M | grep jk
Syntax OK
jk_module (shared)
Configure the jk_module properties in httpd.conf:
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* TomcatA    # TomcatA here must match the jvmRoute defined on the backend Tomcat's Engine
JkMount /status/ stat1
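The worker name must match the jvmRoute attribute on the backend Tomcat's Engine element. A sketch of the relevant part of that node's conf/server.xml (host and appBase values here are the Tomcat defaults):

```xml
<!-- conf/server.xml on the TomcatA node: jvmRoute must equal the worker name -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="TomcatA">
  <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"/>
</Engine>
```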
Create /etc/httpd/conf.d/workers.properties:
worker.list=TomcatA,stat1
worker.TomcatA.port=8009
worker.TomcatA.host=192.168.20.1
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.stat1.type = status
Access mod_jk's built-in status page; it also offers management functions.
Switch to load balancing, with session binding.
Modify httpd.conf:
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* lbcluster1
JkMount /jkstatus/ stat1
Modify /etc/httpd/conf.d/workers.properties:
worker.list = lbcluster1,stat1
worker.TomcatA.type = ajp13
worker.TomcatA.host = 192.168.20.1
worker.TomcatA.port = 8009
worker.TomcatA.lbfactor = 5
worker.TomcatB.type = ajp13
worker.TomcatB.host = 192.168.20.2
worker.TomcatB.port = 8009
worker.TomcatB.lbfactor = 5
worker.lbcluster1.type = lb
worker.lbcluster1.sticky_session = 1
worker.lbcluster1.balance_workers = TomcatA, TomcatB
worker.stat1.type = status
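With sticky_session = 1, mod_jk routes each request by reading the worker name after the last '.' in the session ID, i.e. the jvmRoute suffix Tomcat appends to JSESSIONID. A minimal Python sketch of that lookup, with a hypothetical session ID:

```python
def route_for(session_id, workers):
    """Return the worker named by the jvmRoute suffix, or None to let the balancer pick."""
    # Tomcat appends ".<jvmRoute>" to JSESSIONID when jvmRoute is set on the Engine.
    _, _, suffix = session_id.rpartition(".")
    return suffix if suffix in workers else None

workers = {"TomcatA", "TomcatB"}
print(route_for("A1B2C3D4E5.TomcatA", workers))  # TomcatA
print(route_for("A1B2C3D4E5", workers))          # None (no suffix: any worker may serve it)
```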
Tested successfully.
Overview: using the balancer-manager page provided by mod_proxy_balancer.
<proxy balancer://lbcluster1>
BalancerMember http://172.16.100.68:8080 loadfactor=10 route=TomcatA
BalancerMember http://172.16.100.69:8080 loadfactor=10 route=TomcatB
ProxySet stickysession=ROUTEID
</proxy>
<VirtualHost *:80>
ServerName web1.lee.com
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
<Location /balancer-manager>
SetHandler balancer-manager
ProxyPass !
Order Deny,Allow
Allow from all
</Location>
<Proxy *>
Order Deny,Allow
Allow from all
</Proxy>
ProxyPass /status !
ProxyPass / balancer://lbcluster1/
ProxyPassReverse / balancer://lbcluster1/
<Location />
Order Deny,Allow
Allow from all
</Location>
</VirtualHost>
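Note that Order/Deny/Allow is httpd 2.2 syntax; on httpd 2.4 the equivalent access control for the manager page is (a sketch with the same open-access effect):

```apache
<Location /balancer-manager>
    SetHandler balancer-manager
    Require all granted
</Location>
```

In production you would restrict this page to administrative hosts, e.g. Require ip 192.168.20.0/24, since it allows live changes to the balancer.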
Test:
Implementing a session-replication cluster with DeltaManager