Modify the following udev rules to match your actual environment (the SCSI IDs will differ on your disks):
KERNEL=="sdb",PROGRAM=="/sbin/scsi_id -g -u /dev/sdb",RESULT=="36000c2900709abf31cbb677505e08064",NAME="asm_data",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc",PROGRAM=="/sbin/scsi_id -g -u /dev/sdc",RESULT=="36000c29258759d60b942ff5a2cbb37e1",NAME="asm_orc",OWNER="grid",GROUP="asmadmin",MODE="0660"
/sbin/start_udev
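The RESULT value in each rule must match the SCSI ID reported for that disk. A minimal sketch of how such a rule line is assembled; the actual `scsi_id` call is shown only as a comment because it needs the real device, so the ID below is hard-coded from the example rule:

```shell
# On a real host, query the ID with:
#   SCSI_ID="$(/sbin/scsi_id -g -u /dev/$DISK)"
# Here it is hard-coded from the example above so the line can be built offline.
DISK=sdb
SCSI_ID=36000c2900709abf31cbb677505e08064
RULE="KERNEL==\"$DISK\",PROGRAM==\"/sbin/scsi_id -g -u /dev/$DISK\",RESULT==\"$SCSI_ID\",NAME=\"asm_data\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
echo "$RULE"
```

After appending the generated line to the rules file and running `/sbin/start_udev`, the device should appear as `/dev/asm_data` owned by `grid:asmadmin`.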
#yum install bind
Enable the service at boot:
#chkconfig named on
On RHEL 7 and later, run instead:
#systemctl enable named.service
Edit /etc/named.conf as follows:
options {
directory "/var/named"; // Base directory for named
allow-transfer {"none";}; // Slave servers that may pull zone transfers; deny all by default
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
zone "1.168.192.IN-ADDR.ARPA." IN { // Reverse zone.
type master;
notify no;
file "192.168.1.zone";
};
zone "0.10.10.IN-ADDR.ARPA." IN { // Reverse zone.
type master;
notify no;
file "10.10.0.zone";
};
zone "hoptoad.com." IN {
type master;
notify no;
file "hoptoad.com.zone";
};
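Note that the reverse zone names are the network octets in reverse order followed by `in-addr.arpa`. A small sketch of deriving the name for a /24 prefix; `rev_zone` is a hypothetical helper, not a BIND command:

```shell
# Derive the reverse (in-addr.arpa) zone name from a /24 network prefix,
# matching the two reverse zones declared above.
rev_zone() {
  echo "$1" | awk -F. '{print $3"."$2"."$1".in-addr.arpa"}'
}
rev_zone 192.168.1   # 1.168.192.in-addr.arpa
rev_zone 10.10.0     # 0.10.10.in-addr.arpa
```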
Edit the /var/named/hoptoad.com.zone file for forward resolution, as follows:
$TTL 1H ; Time to live
$ORIGIN hoptoad.com.
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial (today's date + serial number)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
@ IN NS rac1
;
IN A 192.168.1.100
rac-scan IN A 192.168.1.120
rac1 IN A 192.168.1.100
rac2 IN A 192.168.1.101
rac3 IN A 192.168.1.102
rac1-priv IN A 10.10.0.100
rac2-priv IN A 10.10.0.101
rac3-priv IN A 10.10.0.102
rac1-vip IN A 192.168.1.110
rac2-vip IN A 192.168.1.111
rac3-vip IN A 192.168.1.112
;
$ORIGIN hoptoad.com.
hoptoad.com. IN NS hoptoad.com.
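The SOA serial above follows the common YYYYMMDDNN convention (date plus a two-digit counter) and must increase every time the zone file is edited, or slaves and caches will not pick up the change. A sketch of bumping the counter for a same-day edit:

```shell
# The serial 2013011201 is YYYYMMDDNN; bump NN for another edit on the same day.
serial=2013011201
date_part=${serial%??}                      # 20130112
counter=${serial#"$date_part"}              # 01
next=$(printf '%s%02d' "$date_part" $((10#$counter + 1)))
echo "$next"   # 2013011202
```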
Edit the /var/named/192.168.1.zone file for reverse resolution:
$TTL 1H
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial (today's date + serial number)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
;
NS rac1.hoptoad.com.
120 IN PTR rac-scan.hoptoad.com.
100 IN PTR rac1.hoptoad.com.
101 IN PTR rac2.hoptoad.com.
102 IN PTR rac3.hoptoad.com.
110 IN PTR rac1-vip.hoptoad.com.
111 IN PTR rac2-vip.hoptoad.com.
112 IN PTR rac3-vip.hoptoad.com.
Edit the /var/named/10.10.0.zone file for reverse resolution:
$TTL 1H
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial (today's date + serial number)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
;
NS rac1.hoptoad.com.
100 IN PTR rac1-priv.hoptoad.com.
101 IN PTR rac2-priv.hoptoad.com.
102 IN PTR rac3-priv.hoptoad.com.
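In a /24 reverse zone the PTR owner label is simply the host octet of the address, since the first three octets are already part of the zone name. A sketch; `ptr_label` is a hypothetical helper for illustration:

```shell
# The PTR owner label is the last octet of the IP address
# (e.g. 10.10.0.100 -> "100" inside zone 0.10.10.in-addr.arpa).
ptr_label() { echo "${1##*.}"; }
ptr_label 10.10.0.100    # 100
ptr_label 192.168.1.120  # 120
```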
Edit /etc/resolv.conf on each node (no space is allowed after the colon in the options):
options attempts:2
options timeout:1
search hoptoad.com
nameserver 192.168.1.100
Note: errors are logged to the /var/log/messages file.
Note: the fields on the NS line must be separated by a TAB character, otherwise an error is reported.
10.4.5 Verifying DNS
Make sure every hostname resolves to its IP address:
[root@rac1 named]# nslookup rac1.hoptoad.com
Install the cvuqdisk-1.0.9-1.rpm package from the Grid installation media (on every node).
Then run the following command, choosing the node list that matches your cluster:
./runcluvfy.sh stage -post hwos -n slave1,slave2,slave3 -verbose
./runcluvfy.sh stage -post hwos -n rac1,rac2,rac3 -verbose
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
[grid@slave1 grid]$ ./runcluvfy.sh stage -post hwos -n slave1,slave2,slave3 -verbose
The output looks like the following (it varies slightly by environment):
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "slave1"
Destination Node Reachable?
------------------------------------ ------------------------
slave1 yes
slave2 yes
slave3 yes
Result: Node reachability check passed from node "slave1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
slave2 passed
slave1 passed
slave3 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
slave1 passed
slave2 passed
slave3 passed
Verification of the hosts config file successful
Interface information for node "slave1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.201 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:57:A6:F2 1500
eth1 10.10.0.201 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:57:A6:FC 1500
Interface information for node "slave2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.202 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:72:2C:6E 1500
eth1 10.10.0.202 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:72:2C:78 1500
Interface information for node "slave3"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.203 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:40:A1:FC 1500
eth1 10.10.0.203 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:40:A1:06 1500
Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1[192.168.1.201] slave2[192.168.1.202] yes
slave1[192.168.1.201] slave3[192.168.1.203] yes
slave2[192.168.1.202] slave3[192.168.1.203] yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) slave1,slave2,slave3
Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1:192.168.1.201 slave2:192.168.1.202 passed
slave1:192.168.1.201 slave3:192.168.1.203 passed
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1[10.10.0.201] slave2[10.10.0.202] yes
slave1[10.10.0.201] slave3[10.10.0.203] yes
slave2[10.10.0.202] slave3[10.10.0.203] yes
Result: Node connectivity passed for subnet "10.10.0.0" with node(s) slave1,slave2,slave3
Check: TCP connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1:10.10.0.201 slave2:10.10.0.202 passed
slave1:10.10.0.201 slave3:10.10.0.203 passed
Result: TCP connectivity check passed for subnet "10.10.0.0"
Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
slave1 eth0:192.168.1.201
slave2 eth0:192.168.1.202
slave3 eth0:192.168.1.203
Interfaces found on subnet "10.10.0.0" that are likely candidates for a private interconnect are:
slave1 eth1:10.10.0.201
slave2 eth1:10.10.0.202
slave3 eth1:10.10.0.203
WARNING:
Could not find a suitable set of interfaces for VIPs
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "10.10.0.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed
Checking shared storage accessibility...
WARNING:
slave3:Cannot verify the shared state for device /dev/sda1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3
WARNING:
slave3:Cannot verify the shared state for device /dev/sda2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3
WARNING:
slave3:Cannot verify the shared state for device /dev/sda3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3
Disk Sharing Nodes (3 in count)
------------------------------------ ------------------------
/dev/sdb slave1 slave2 slave3
Disk Sharing Nodes (3 in count)
------------------------------------ ------------------------
/dev/sdc slave1 slave2 slave3
Shared storage check was successful on nodes "slave1,slave2,slave3"
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Post-check for hardware and operating system setup was successful.
The run ends with "successful", so the post-check is complete.
Similarly, run the pre-installation check for CRS (again choosing the node list that matches your cluster):
./runcluvfy.sh stage -pre crsinst -n slave1,slave2,slave3
./runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac3
./runcluvfy.sh stage -pre crsinst -n rac1,rac2