Managing OCR and Votedisk in Oracle 10g (3)

How do you recreate the OCR and the votedisk? (This is very useful when both are corrupted and no backup is available.)

First, stop the cluster on all nodes. Before tearing anything down, record the current OCR and votedisk configuration and confirm that the stack is down:

[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@node1 ~]# crsctl query css votedisk
 0.    0    /dev/raw/raw2
 1.    0    /dev/raw/raw4

located 2 votedisk(s).
[root@node1 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
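
Here the stack is already down on both nodes. If it were still running, it would need to be stopped first; a minimal sketch (run as root on every node):

[root@node1 ~]# crsctl stop crs
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs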

Run the rootdelete.sh script on every node:

[root@node1 ~]# cd $CRS_HOME/install
[root@node1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@node1 install]# ssh node2 /u01/app/crs_home/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
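
As the output shows, rootdelete.sh stops the stack, removes the clusterware daemon entries from /etc/inittab, and cleans up the SCR and socket directories. A quick sanity check, assuming the standard 10g init.evmd/init.cssd/init.crsd inittab entries:

[root@node1 install]# grep -c 'init\.crsd\|init\.cssd\|init\.evmd' /etc/inittab
0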

On any node, run the rootdeinstall.sh script (it only needs to be run on one node):

[root@node1 install]# ./rootdeinstall.sh

Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.486033 seconds, 21.6 MB/s
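
rootdeinstall.sh zeroes out only the OCR device, as the dd output above shows. The voting disks are reformatted by root.sh later, but if you want to be certain no stale contents remain, they can be cleared the same way; a minimal sketch, assuming /dev/raw/raw2 is a voting disk you intend to reuse (double-check the device name, since dd is destructive):

[root@node1 install]# dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=10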

Run the root.sh script on all nodes; this reinitializes the OCR and formats the voting disk:

[root@node1 crs_home]# pwd
/u01/app/crs_home
[root@node1 crs_home]# ./root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 node1
CSS is inactive on these nodes.
 node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@node1 crs_home]# ssh node2 /u01/app/crs_home/root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 node1
 node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid interface "255.255.255.0/eth0" entered in an input argument.

[root@node1 crs_home]# oifcfg iflist
eth0  192.168.100.0
eth1  100.100.100.0
[root@node1 crs_home]# crs_stat -t -v
CRS-0202: No resources are registered.

[root@node1 crs_home]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@node1 crs_home]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

As the output above shows, CRS itself is running again, but the nodeapps resources (VIP, ONS, GSD) were not configured (probably because I had changed the IP addresses earlier), so vipca has to be invoked by hand.
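
The 'Invalid interface "255.255.255.0/eth0"' failure from the silent vipca run is typically resolved by registering the public and private interfaces in the OCR with oifcfg and then running vipca manually. A minimal sketch, assuming the subnets reported by oifcfg iflist above (verify them in your own environment; run as root on one node):

[root@node1 ~]# oifcfg setif -global eth0/192.168.100.0:public
[root@node1 ~]# oifcfg setif -global eth1/100.100.100.0:cluster_interconnect
[root@node1 ~]# vipca

Once vipca completes, the nodeapps resources come online: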

[root@node1 ~]# crs_stat -t -v
Name          Type          R/RA  F/FT  Target    State    Host       
----------------------------------------------------------------------
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1     
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1     
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1     
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2     
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2     
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2

Next, add the remaining resources, such as ASM, the database instances, and the listeners, back into the RAC configuration.

Run netca to recreate the listeners and use srvctl to register the other services; a sketch follows below. With that, the rebuild of the OCR and votedisk is complete.
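
A sketch of the srvctl registrations, using hypothetical names (+ASM1/+ASM2 for the ASM instances, a database called orcl, and an example ORACLE_HOME path; substitute your own):

[oracle@node1 ~]$ srvctl add asm -n node1 -i +ASM1 -o /u01/app/oracle/product/10.2.0/db_1
[oracle@node1 ~]$ srvctl add asm -n node2 -i +ASM2 -o /u01/app/oracle/product/10.2.0/db_1
[oracle@node1 ~]$ srvctl add database -d orcl -o /u01/app/oracle/product/10.2.0/db_1
[oracle@node1 ~]$ srvctl add instance -d orcl -i orcl1 -n node1
[oracle@node1 ~]$ srvctl add instance -d orcl -i orcl2 -n node2
[oracle@node1 ~]$ srvctl start asm -n node1
[oracle@node1 ~]$ srvctl start asm -n node2
[oracle@node1 ~]$ srvctl start database -d orcl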

One thing to note: after the rebuild, the OCR and votedisk configuration is reset to the initial values from when the cluster was first created. If we had modified it since then, that maintenance has to be redone by hand:

[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3760
         Available space (kbytes) :    1121976
         ID                       : 1334010282
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@node1 ~]# crsctl query css votedisk
 0.    0    /dev/raw/raw2

located 1 votedisk(s).
[root@node1 ~]#
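
For example, the second voting disk (/dev/raw/raw4) that existed before the rebuild is gone and would have to be added back (an OCR mirror, had one been configured, would similarly be re-added with ocrconfig). A minimal sketch, assuming the disk is still usable; in 10.2 the -force flag is required and the clusterware stack must be down on all nodes when using it:

[root@node1 ~]# crsctl stop crs
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
[root@node1 ~]# crsctl add css votedisk /dev/raw/raw4 -force
[root@node1 ~]# crsctl start crs
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl start crs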
