On node rac10g01:
[root@rac10g01 ~]# sh /u01/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/oraInventory to 770.
Changing groupname of /u01/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac10g01 ~]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10g01 rac10g01-priv rac10g01
node 2: rac10g02 rac10g02-priv rac10g02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac10g01
CSS is inactive on these nodes.
rac10g02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac10g01 ~]#
On node rac10g02:
[root@rac10g02 ~]# sh /u01/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/oraInventory to 770.
Changing groupname of /u01/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac10g02 ~]#
Before running root.sh on node rac10g02, edit the vipca and srvctl scripts (make this change on both nodes):
[oracle@rac10g02 ~]$ vim /u01/oracle/product/10.2.0.1/crs_1/bin/vipca
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    <-- add this line below the export
[oracle@rac10g02 ~]$ vim /u01/oracle/product/10.2.0.1/crs_1/bin/srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    <-- add this line below the export
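The same edit can be scripted with sed instead of done by hand in vim. A minimal sketch: on a real node you would point it at the vipca and srvctl files under the CRS home bin directory from this install; the demo below runs against a temporary copy so it can be tried anywhere.

```shell
# Sketch of automating the vipca/srvctl fix with sed. On a real node,
# target /u01/oracle/product/10.2.0.1/crs_1/bin/vipca and .../srvctl;
# this demo uses a temporary file with the same two lines.
f=$(mktemp)
printf 'LD_ASSUME_KERNEL=2.4.19\nexport LD_ASSUME_KERNEL\n' > "$f"
# append "unset LD_ASSUME_KERNEL" directly after the export line
sed -i '/^export LD_ASSUME_KERNEL/a\
unset LD_ASSUME_KERNEL' "$f"
cat "$f"
rm -f "$f"
```

The `-i` flag edits in place (GNU sed); add a suffix such as `-i.bak` to keep a backup of the original script.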
[root@rac10g02 ~]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10g01 rac10g01-priv rac10g01
node 2: rac10g02 rac10g02-priv rac10g02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
/etc/profile: line 61: ulimit: open files: cannot modify limit: Operation not permitted
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac10g01
rac10g02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@rac10g02 ~]#
If the following error appears, manually configure the public and cluster interconnect interfaces:
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth0/192.168.3.0:public
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth0/10.0.0.0:cluster_interconnect
[root@rac10g02 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg getif
eth0 192.168.3.0 global public
eth0 10.0.0.0 global cluster_interconnect
[root@rac10g02 ~]#
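Note that oifcfg setif expects the subnet (the network address, e.g. 192.168.3.0 above), not a host IP. If in doubt, the subnet can be derived from an interface address and its netmask with a per-octet bitwise AND; the host address and mask below are assumed example values.

```shell
# Hedged sketch: compute the network address that oifcfg setif expects
# from a host IP and netmask. 192.168.3.101/255.255.255.0 is an
# assumed example host address on the public network.
ip=192.168.3.101
mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
# → 192.168.3.0
```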
Then run vipca on node rac10g01 to create the virtual IP addresses:
[root@rac10g01 ~]# /u01/oracle/product/10.2.0.1/crs_1/bin/vipca
Exit the vipca window and continue installing the clusterware.
13. Verify that the clusterware installation succeeded
On node rac10g01:
[oracle@rac10g01 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....g01.gsd application    ONLINE    ONLINE    rac10g01
ora....g01.ons application    ONLINE    ONLINE    rac10g01
ora....g01.vip application    ONLINE    ONLINE    rac10g01
ora....g02.gsd application    ONLINE    ONLINE    rac10g02
ora....g02.ons application    ONLINE    ONLINE    rac10g02
ora....g02.vip application    ONLINE    ONLINE    rac10g02
[oracle@rac10g01 ~]$
On node rac10g02:
[oracle@rac10g02 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....g01.gsd application    ONLINE    ONLINE    rac10g01
ora....g01.ons application    ONLINE    ONLINE    rac10g01
ora....g01.vip application    ONLINE    ONLINE    rac10g01
ora....g02.gsd application    ONLINE    ONLINE    rac10g02
ora....g02.ons application    ONLINE    ONLINE    rac10g02
ora....g02.vip application    ONLINE    ONLINE    rac10g02
[oracle@rac10g02 ~]$
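Instead of eyeballing the crs_stat -t output on each node, it can be scanned mechanically. A sketch, assuming the two header lines followed by five-column resource rows as in the transcript above; the demo feeds a sample of that output so the check is runnable anywhere.

```shell
# Fail (nonzero exit) if any resource row's Target ($3) or State ($4)
# is not ONLINE; the two header lines of crs_stat -t are skipped.
check_crs() {
    awk 'NR > 2 && ($3 != "ONLINE" || $4 != "ONLINE") { bad = 1; print $1, $3, $4 }
         END { exit bad }'
}
# On a live node: /u01/oracle/product/10.2.0.1/crs_1/bin/crs_stat -t | check_crs
# Demo against a sample of the transcript's output:
check_crs <<'EOF'
Name Type Target State Host
------------------------------------------------------------
ora....g01.vip application ONLINE ONLINE rac10g01
ora....g02.vip application ONLINE ONLINE rac10g02
EOF
echo "check exit status: $?"
```

Any resource that is not fully ONLINE is printed along with its Target and State, which makes the check easy to drop into a post-install script.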