A two-node Oracle RAC 10.2.0.4 cluster (nodes ora1 and ora2) runs on Linux. Both disks on ora1 failed and took the node down, leaving only ora2 running and continuing to serve the application. After rebuilding ora1's operating system, how do we quickly restore the two-node RAC environment without application downtime?
The procedure is as follows:
1. Reinstall the operating system on ora1 (keep the OS version and system parameters identical to ora2);
2. Configure the Oracle environment on ora1 (ASM, raw devices, and so on);
3. Create the oracle user on ora1 (with the same UID and GID as on ora2) and set up SSH user equivalence between the nodes;
4. Tar up the Oracle base directory on ora2 and copy it to ora1:
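Step 3 can be sketched as follows. The numeric IDs 501/502 and the group names oinstall/dba are placeholders, not values from this cluster; read the real ones from ora2 first:

```shell
# Sketch: recreate the oracle user on ora1 with the same numeric IDs as on ora2.
# 501/502 are assumed placeholders -- check the real values on ora2 first:
#   ssh ora2 'id oracle'
groupadd -g 501 oinstall
groupadd -g 502 dba
useradd -u 501 -g oinstall -G dba oracle

# SSH user equivalence (run as oracle on ora1, mirror on ora2):
#   ssh-keygen -t rsa
#   then append each node's ~/.ssh/id_rsa.pub to the other node's
#   ~/.ssh/authorized_keys so scp/ssh work without a password prompt
```

Matching the UID/GID matters because the tarball copied in step 4 carries numeric ownership; mismatched IDs would leave the extracted files owned by the wrong user.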
[root@ora2 ~]# tar -cf u01.tar /u01/app
[root@ora2 ~]# scp u01.tar ora1:/root
5. Extract u01.tar on ora1 and run the CRS root.sh script:
Running root.sh rewrites the startup entries into /etc/init.d and related locations. Because the OCR already contains this node's resources, the script detects the existing configuration and reports it as successfully configured.
[root@ora1 /]# tar -xf /root/u01.tar
[root@ora1 /]# cd /u01/app/crs
[root@ora1 crs]# ./root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ora1 ora1-priv ora1
node 2: ora2 ora2-priv ora2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
ora1
ora2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes
Starting ONS application resource on (2) nodes
DONE.
6. Copy /etc/oratab from ora2 to ora1:
[root@ora2 ~]# ls -l /etc/oratab
-rw-rw-r-- 1 oracle root 765 Oct 30 2009 /etc/oratab
[root@ora2 ~]# cat /etc/oratab
.........
+ASM2:/u01/app/oracle/product/10.2.0/db_1:N
odb:/u01/app/oracle/product/10.2.0/db_1:N
[root@ora2 ~]# scp /etc/oratab ora1:/etc/
[root@ora1 ~]# chown oracle:root /etc/oratab
Edit /etc/oratab on ora1 and change +ASM2 to +ASM1:
[root@ora1 ~]# cat /etc/oratab
.........
+ASM1:/u01/app/oracle/product/10.2.0/db_1:N
odb:/u01/app/oracle/product/10.2.0/db_1:N
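The +ASM2 to +ASM1 change can also be made non-interactively with sed. A minimal sketch, demonstrated on a sample line rather than the live file (on the real node the target would be /etc/oratab):

```shell
# Sketch: rewrite the ASM SID in an oratab-style line.
# On the real node: sed -i 's/^+ASM2:/+ASM1:/' /etc/oratab
line='+ASM2:/u01/app/oracle/product/10.2.0/db_1:N'
fixed=$(printf '%s\n' "$line" | sed 's/^+ASM2:/+ASM1:/')
echo "$fixed"
```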
7. Run the RDBMS root.sh script:
[root@ora1 ~]# cd /u01/app/oracle/product/10.2.0/db_1/
[root@ora1 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
8. Edit $ORACLE_HOME/network/admin/listener.ora, replacing the ora2-specific entries with ora1's. Note that the listener name is not the default listener but listener_ora1.
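A sketch of what the relevant listener.ora entry on ora1 might look like after the edit. The VIP name ora1-vip and port 1521 are assumptions based on common naming; mirror whatever the existing ora2 entry uses:

```
LISTENER_ORA1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1)(PORT = 1521)(IP = FIRST))
    )
  )
```

Listing the VIP address first lets clients fail over quickly if the node goes down again.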
9. Recreate the parameter files and password files under $ORACLE_HOME/dbs by copying the instance-2 versions and renaming them for instance 1:
[oracle@ora1 ~]$ cd $ORACLE_HOME/dbs
[oracle@ora1 dbs]$ cp initodb2.ora initodb1.ora
[oracle@ora1 dbs]$ cp init+ASM2.ora init+ASM1.ora
[oracle@ora1 dbs]$ cp orapw+ASM2 orapw+ASM1
[oracle@ora1 dbs]$ cp orapwodb2 orapwodb1
10. Start all RAC resources.
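Step 10 typically comes down to srvctl. A sketch, assuming the database name odb and instance names odb1/+ASM1 implied by the earlier steps:

```shell
# Run as the oracle user on either node.
srvctl start nodeapps -n ora1         # VIP, GSD, ONS and listener on ora1
srvctl start asm -n ora1              # the +ASM1 instance
srvctl start instance -d odb -i odb1  # the odb1 database instance
crs_stat -t                           # verify all resources show ONLINE
```

Once crs_stat -t shows every resource ONLINE on both nodes, the two-node RAC environment is fully restored, with ora2 having served the application throughout.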