*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
<SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
<SID1>.instance_name=<SID1>
<SID1>.instance_number=1
<SID1>.thread=1
<SID1>.local_listener=<listener_alias>_<node1>
<SID2>.instance_name=<SID2>
<SID2>.instance_number=2
<SID2>.local_listener=<listener_alias>_<node2>
<SID2>.thread=2
<SID2>.undo_tablespace=UNDOTBS2
<SID2>.cluster_database = TRUE
<SID2>.cluster_database_instances = 2
where <SID1> is equal to "<db_name>1" and <SID2> is equal to "<db_name>2", e.g. ORCL1, ORCL2.
5) Change the location of the control files in the parameter file from the local drive to the shared cluster file system location,
i.e. control_files='<local_path>/control01.ctl'
to control_files='<shared_path>/control01.ctl'
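For example, in /tmp/initORCL.ora (the /cfs path below is only illustrative; use your own shared location):
*.control_files='/cfs/oradata/ORCL/control01.ctl'   # example shared path, adjust for your environment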
6) Create the spfile from the pfile (the spfile should be stored on the shared device)
export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
create spfile='<shared_path>/spfileORCL.ora' from pfile='/tmp/initORCL.ora';
exit
7) Create the $ORACLE_HOME/dbs/init<SID>.ora file, e.g. initORCL1.ora, that contains the following entry
spfile='<spfile_path_name>'
where <spfile_path_name> is the complete path name of the SPFILE.
Example:
spfile='/cfs/spfile/spfileORCL1.ora'
8) Create a new password file for the ORCL1 instance.
orapwd file=orapwORCL1 password=oracle
9) Start the database in mount state
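For example, assuming the init file created in step 7 points at the shared spfile:
export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
startup mount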
10) Rename the datafiles and redo logs to the new shared device, as in the example after step 11
alter database rename file '<old_path/file_name>' to '<new_path/file_name>';
11) Add second instance redo logs (or more when multiple instances will be started)
alter database
add logfile thread 2
group 3 ('<shared_path/redo_log_file>') size <size>,
group 4 ('<shared_path/redo_log_file>') size <size>;
alter database enable public thread 2;
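For example (the paths and sizes below are illustrative only):
alter database rename file '/u01/oradata/ORCL/system01.dbf' to '/cfs/oradata/ORCL/system01.dbf';   -- repeat for every datafile and redo log
alter database add logfile thread 2
group 3 ('/cfs/oradata/ORCL/redo2_01.log') size 50M,
group 4 ('/cfs/oradata/ORCL/redo2_02.log') size 50M;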
12) Create the undo tablespace for the second (or each additional) instance from the existing instance
Path and file name will be different for your environment
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE
'/dev/RAC/undotbs_02_210.dbf' SIZE 200M ;
13) Open your database (i.e. alter database open;) and run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance
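For example, connected to the existing instance as SYSDBA:
alter database open;
@?/rdbms/admin/catclust.sql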
2. On the second node and other nodes
14) Set ORACLE_SID and ORACLE_HOME environment variables on the second node
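For example (the ORACLE_HOME path below is only illustrative):
export ORACLE_SID=ORCL2
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1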
15) Create the $ORACLE_HOME/dbs/init<SID>.ora file, e.g. initORCL2.ora, for the second node in the same way as in point 7.
16) Create a new password file for the second instance, ORCL2, as in point 8
orapwd file=orapwORCL2 password=oracle
17) Start the second Instance
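For example:
export ORACLE_SID=ORCL2
sqlplus "/ as sysdba"
startup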
3. On one of the nodes
18) After configuring the listener, register the database and its instances with the cluster as below
srvctl add database -d <db_name> -o <ORACLE_HOME> -p <spfile_path>
srvctl add instance -d <db_name> -i <instance1_name> -n <node1_name>
srvctl add instance -d <db_name> -i <instance2_name> -n <node2_name>
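For example (ORACLE_HOME, spfile path and node names are illustrative only):
srvctl add database -d ORCL -o /u01/app/oracle/product/10.2.0/db_1 -p /cfs/spfile/spfileORCL.ora
srvctl add instance -d ORCL -i ORCL1 -n node1
srvctl add instance -d ORCL -i ORCL2 -n node2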
19) In case ASM is used, add the RDBMS instance / ASM instance dependency, e.g.
srvctl modify instance -d <db_name> -i <instance_name> -s <+ASM1>
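For example (instance and ASM instance names are illustrative):
srvctl modify instance -d ORCL -i ORCL1 -s +ASM1
srvctl modify instance -d ORCL -i ORCL2 -s +ASM2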
The whole single-instance-to-RAC conversion does not take long, and testing in advance and preparing the initialization parameter file and the scripts to be run will shorten this part even further.
For more on converting a single-instance Database to a RAC Database, see the article "How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure (Doc ID 747457.1)", which applies to 10g and later databases.
Once the single-instance-to-RAC conversion is complete, the remaining work of adjusting the Data Guard (DG) parameters and the IP addresses finishes the cross-platform migration of the database and its conversion from single instance to RAC.
--end--