(7). Resource stickiness
Resource stickiness expresses how strongly a resource prefers to stay on the node where it is currently running.
Stickiness value ranges and their effects:
0: the default. The resource is placed on the most suitable node in the cluster, which means it is moved whenever a better-suited node becomes available. This is almost equivalent to automatic failback, except that the resource may move to a node other than the one it previously ran on;
Greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available. The higher the value, the stronger the preference for the current node;
Less than 0: the resource prefers to move away from its current node. The higher the absolute value, the stronger that preference;
INFINITY: the resource always stays where it is, unless it is forced off because the node can no longer run it (node shutdown, node standby, migration-threshold reached, or a configuration change). This is almost equivalent to completely disabling automatic failback;
-INFINITY: the resource always moves away from its current node.
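Besides the cluster-wide default set below, stickiness can also be set per resource as a meta attribute. A sketch using crmsh's resource-level meta command on the vip resource defined earlier (the value 200 is just an example):

```
crm(live)resource# meta vip set resource-stickiness 200
```

A per-resource meta attribute overrides the rsc_defaults value for that resource only.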
Here we set a default stickiness value for all resources with rsc_defaults:
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# show
node node1.test.com
node node2.test.com
primitive vip ocf:heartbeat:IPaddr \
params ip="192.168.18.200" nic="eth0" cidr_netmask="24"
property $id="cib-bootstrap-options" \
dc-version="1.1.8-7.el6-394e906" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# commit
(8). Using the IP address resource configured above, turn this cluster into an active/passive web (httpd) service cluster
Node1:
[root@node1 ~]# yum -y install httpd
[root@node1 ~]# echo "<h1>node1.test.com</h1>" > /var/www/html/index.html
Node2:
[root@node2 ~]# yum -y install httpd
[root@node2 ~]# echo "<h1>node2.test.com</h1>" > /var/www/html/index.html
Test both pages in a browser; each node should return its own index page (screenshots of node1 and node2 omitted).
Then start the httpd service manually on each node and confirm that it serves pages correctly. Afterwards, stop httpd with the commands below and make sure it will not start automatically at boot (run on both nodes):
node1:
[root@node1 ~]# /etc/init.d/httpd stop
[root@node1 ~]# chkconfig httpd off
node2:
[root@node2~]# /etc/init.d/httpd stop
[root@node2 ~]# chkconfig httpd off
Next we add the httpd service as a cluster resource. Two resource agent classes are available for httpd: lsb and ocf:heartbeat. For simplicity, we use the lsb class here.
First, check the syntax of the lsb httpd resource agent with the following commands:
crm(live)# ra
crm(live)ra# info lsb:httpd
start and stop Apache HTTP Server (lsb:httpd)
The Apache HTTP Server is an efficient and extensible \
server implementing the current HTTP standards.
Operations' defaults (advisory minimum):
start timeout=15
stop timeout=15
status timeout=15
restart timeout=15
force-reload timeout=15
monitor timeout=15 interval=15
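The advisory defaults above include a monitor operation. If you want the cluster to actively health-check the service, the operation can be declared explicitly when the resource is created (a sketch; the interval and timeout values are illustrative, not required):

```
crm(live)configure# primitive httpd lsb:httpd \
    op monitor interval="15" timeout="15"
```

Without a monitor operation, Pacemaker only notices failures during start/stop, not while the service is running.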
Next, create the httpd resource:
crm(live)# configure
crm(live)configure# primitive httpd lsb:httpd
crm(live)configure# show
node node1.test.com
node node2.test.com
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr \
params ip="192.168.18.200" nic="eth0" cidr_netmask="24"
property $id="cib-bootstrap-options" \
dc-version="1.1.8-7.el6-394e906" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# verify
crm(live)configure# commit
Check the resource status:
[root@node1 ~]# crm status
Cannot change active directory to /var/lib/pacemaker/cores/root: No such file or directory (2)
Last updated: Thu Aug 15 14:55:04 2013
Last change: Thu Aug 15 14:54:14 2013 via cibadmin on node1.test.com
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.
Online: [ node1.test.com node2.test.com ]
vip (ocf::heartbeat:IPaddr): Started node2.test.com
httpd (lsb:httpd): Started node1.test.com
The output above shows that vip and httpd may end up running on different nodes. That does not work for a web service that must be reachable through this IP: the two resources have to run on the same node. There are two ways to achieve this. One is to define a resource group and put vip and httpd into it together; the other is to define resource constraints that keep the resources on the same node. Let's start with the first method: defining a resource group.
(9). Defining a resource group
crm(live)# configure
crm(live)configure# group webservice vip httpd
crm(live)configure# show
node node1.test.com
node node2.test.com
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr \
params ip="192.168.18.200" nic="eth0" cidr_netmask="24"
group webservice vip httpd
property $id="cib-bootstrap-options" \
dc-version="1.1.8-7.el6-394e906" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# verify
crm(live)configure# commit
Check the resource status again:
[root@node1 ~]# crm status
Cannot change active directory to /var/lib/pacemaker/cores/root: No such file or directory (2)
Last updated: Thu Aug 15 15:33:09 2013
Last change: Thu Aug 15 15:32:28 2013 via cibadmin on node1.test.com
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.
Online: [ node1.test.com node2.test.com ]
Resource Group: webservice
vip (ocf::heartbeat:IPaddr): Started node2.test.com
httpd (lsb:httpd): Started node2.test.com
As you can see, all resources are now running on node2. Next, let's simulate a failure and see what happens:
crm(live)# node
crm(live)node#
? cd end help online show status-attr
attribute clearstate exit list quit standby up
bye delete fence maintenance ready status utilization
crm(live)node# standby
[root@node1 ~]# crm status
Cannot change active directory to /var/lib/pacemaker/cores/root: No such file or directory (2)
Last updated: Thu Aug 15 15:39:05 2013
Last change: Thu Aug 15 15:38:57 2013 via crm_attribute on node2.test.com
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.
Node node2.test.com: standby
Online: [ node1.test.com ]
Resource Group: webservice
vip (ocf::heartbeat:IPaddr): Started node1.test.com
httpd (lsb:httpd): Started node1.test.com
[root@node1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:91:45:90
inet addr:192.168.18.201 Bcast:192.168.18.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe91:4590/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:408780 errors:0 dropped:0 overruns:0 frame:0
TX packets:323137 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:60432533 (57.6 MiB) TX bytes:57541647 (54.8 MiB)
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:91:45:90
inet addr:192.168.18.200 Bcast:192.168.18.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6525 errors:0 dropped:0 overruns:0 frame:0
TX packets:6525 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:882555 (861.8 KiB) TX bytes:882555 (861.8 KiB)
[root@node1 ~]# netstat -ntulp | grep :80
tcp 0 0 :::80 :::* LISTEN 16603/httpd
As you can see, once node2 is put into standby, all resources fail over to node1. Now let's access the web page again to confirm.
That concludes the demonstration of resource groups. Next, let's look at how to define resource constraints.
(10). Defining resource constraints
First bring node2 back online, then delete the resource group:
crm(live)node# online
[root@node1 ~]# crm_mon
Last updated: Thu Aug 15 15:48:38 2013
Last change: Thu Aug 15 15:46:21 2013 via crm_attribute on node2.test.com
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.
Online: [ node1.test.com node2.test.com ]
Resource Group: webservice
vip (ocf::heartbeat:IPaddr): Started node1.test.com
httpd (lsb:httpd): Started node1.test.com
Deleting the resource group:
crm(live)# resource
crm(live)resource# show
Resource Group: webservice
vip (ocf::heartbeat:IPaddr): Started
httpd (lsb:httpd): Started
crm(live)resource# stop webservice # stop the resource group
crm(live)resource# show
Resource Group: webservice
vip (ocf::heartbeat:IPaddr): Stopped
httpd (lsb:httpd): Stopped
crm(live)resource# cleanup webservice # clean up resource state
Cleaning up vip on node1.test.com
Cleaning up vip on node2.test.com
Cleaning up httpd on node1.test.com
Cleaning up httpd on node2.test.com
Waiting for 1 replies from the CRMd. OK
crm(live)# configure
crm(live)configure# delete
cib-bootstrap-options node1.test.com rsc-options webservice
httpd node2.test.com vip
crm(live)configure# delete webservice # delete the resource group
crm(live)configure# show
node node1.test.com
node node2.test.com \
attributes standby="off"
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr \
params ip="192.168.18.200" nic="eth0" cidr_netmask="24"
property $id="cib-bootstrap-options" \
dc-version="1.1.8-7.el6-394e906" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1376553277"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# commit
[root@node1 ~]# crm_mon
Last updated: Thu Aug 15 15:56:59 2013
Last change: Thu Aug 15 15:56:12 2013 via cibadmin on node1.test.com
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.
Online: [ node1.test.com node2.test.com ]
vip (ocf::heartbeat:IPaddr): Started node1.test.com
httpd (lsb:httpd): Started node2.test.com
As you can see, the resources are once again spread across the two nodes, so let's define constraints to keep them on the same node. First, a quick review: resource constraints specify which cluster nodes a resource may run on, the order in which resources are started, and which other resources a given resource depends on. Pacemaker provides three kinds of resource constraints:
Resource Location: defines which nodes a resource may, may not, or preferably run on;
Resource Collocation: defines whether two cluster resources may or may not run together on the same node;
Resource Order: defines the order in which cluster resources are started on a node.
When defining constraints you also assign scores. Scores of all kinds are central to how the cluster works: everything from migrating resources to deciding which resources to stop in a degraded cluster is achieved by manipulating scores in some way. Scores are calculated per resource, and any node whose score for a resource is negative cannot run that resource. After the scores are computed, the cluster places the resource on the node with the highest score. INFINITY is currently defined as 1,000,000. Addition and subtraction with infinity follow three basic rules:
Any value + INFINITY = INFINITY
Any value - INFINITY = -INFINITY
INFINITY - INFINITY = -INFINITY
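The three rules above can be sketched as a small shell function. This is a toy illustration of the arithmetic, not Pacemaker code; INF simply mirrors Pacemaker's internal 1,000,000 limit:

```shell
# Pacemaker scores saturate at +/-INFINITY, currently 1,000,000.
INF=1000000

# add_score: combine two scores following the three infinity rules.
add_score() {
    a=$1; b=$2
    # anything - INFINITY = -INFINITY (this also covers INFINITY - INFINITY)
    if [ "$a" -le "-$INF" ] || [ "$b" -le "-$INF" ]; then
        echo "-$INF"; return
    fi
    # anything + INFINITY = INFINITY
    if [ "$a" -ge "$INF" ] || [ "$b" -ge "$INF" ]; then
        echo "$INF"; return
    fi
    echo $((a + b))
}

add_score 100 50            # -> 150
add_score 42 "$INF"         # -> 1000000
add_score "$INF" "-$INF"    # -> -1000000
```

Note that the -INFINITY rule is checked first, which is why INFINITY - INFINITY yields -INFINITY rather than canceling out.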
When defining resource constraints, you can also give each constraint a score. The score expresses the weight assigned to the constraint: constraints with higher scores are applied before those with lower scores. By creating several location constraints with different scores for a given resource, you can control the order of the nodes that the resource will fail over to.
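Applied to our vip and httpd resources, the group from step (9) could be replaced by a collocation constraint plus an order constraint. A sketch (the constraint names httpd_with_vip and httpd_after_vip are arbitrary identifiers chosen here; a score of inf makes both constraints mandatory):

```
crm(live)configure# colocation httpd_with_vip inf: httpd vip
crm(live)configure# order httpd_after_vip inf: vip httpd
crm(live)configure# verify
crm(live)configure# commit
```

The collocation constraint forces httpd onto the node where vip runs, and the order constraint makes sure the IP address is brought up before httpd starts.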