Oracle RAC: Rebuilding the OCR and Votedisk

Ha, I just said the previous post would be my last, but I got bored and put together another test.

Environment:
OS: Red Hat Enterprise Linux 5.8
DB: Oracle 10.2.0.5 RAC on raw devices

You should get into the habit of backing up the OCR and votedisk regularly. Even without a backup, though, both can be rebuilt from scratch, much like a control file, although the procedure is more involved. The detailed steps are below.
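Worth knowing before starting: CRS also takes physical OCR backups automatically (roughly every four hours, with a rotation kept under $ORA_CRS_HOME/cdata/<cluster_name> on the master node), so the manual export below is extra insurance rather than the only safety net. A minimal check, assuming a standard 10.2 CRS installation:

# Run as root; lists the automatic OCR backups CRS has accumulated
ocrconfig -showbackup

# A physical backup from that list could later be restored (with CRS down) via:
#   ocrconfig -restore <backup_file>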

First, back up the OCR and votedisk:

[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1048296
         Used space (kbytes)      :       3292
         Available space (kbytes) :    1045004
         ID                       : 1891074113
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
[root@rac1 ~]# crsctl qeury css votedisk
Unknown parameter: qeury
[root@rac1 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw3
 1.     0    /dev/raw/raw4
 2.     0    /dev/raw/raw5
[root@rac1 ~]# dd if=/dev/raw/raw3 of=/opt/votedisk.bak
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 462.285 seconds, 2.3 MB/s
[root@rac1 ~]# ocrconfig -export /opt/ocr.bak
[root@rac1 ~]# cd /opt/
[root@rac1 opt]# ls
ocr.bak  ORCLfmap  votedisk.bak
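For completeness: if the goal were only to roll back to the backups just taken, rather than rebuild from scratch, the reverse path is roughly the following sketch (assuming the same raw devices are reused and CRS is stopped everywhere first):

# As root, on every node: stop the clusterware stack
crsctl stop crs

# Import the logical OCR export taken above
ocrconfig -import /opt/ocr.bak

# Write the saved votedisk image back onto its raw device
dd if=/opt/votedisk.bak of=/dev/raw/raw3

# Restart the stack on each node
crsctl start crs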

Remove the existing configuration:

[root@rac1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@rac2 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories

Then, on the installation node only, run:

[root@rac1 install]# ./rootdeinstall.sh
Removing contents from OCR mirror device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.521651 seconds, 20.1 MB/s
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.496207 seconds, 21.1 MB/s
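Both cleanup scripts ship in the install directory of the clusterware home, and the CRS stack should already be down on every node before they run. A short sketch of the preliminaries; the CRS home path below is an assumed example, substitute your own:

# As root, on each node: make sure CRS is stopped
crsctl stop crs

# The scripts live under <CRS_HOME>/install
cd /u01/app/oracle/product/10.2.0/crs/install
ls rootdelete.sh rootdeinstall.sh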

Rerun the root scripts:

[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid syntax used for option "nodevips". Check usage (vipca -help) for proper syntax.

The silent vipca call at the end of root.sh failed with a syntax error, so the network interfaces have to be classified by hand before running vipca manually. Configure the public and cluster interconnect interfaces:

[root@rac2 crs]# oifcfg getif
[root@rac2 crs]# oifcfg iflist
eth0  192.168.56.0
eth1  192.168.11.0
[root@rac2 crs]# oifcfg setif -global eth0/192.168.56.0:public
[root@rac2 crs]# oifcfg setif -global eht1/192.168.11.0:cluster_interconnect
[root@rac2 crs]# oifcfg getif
eth0  192.168.56.0  global  public
eht1  192.168.11.0  global  cluster_interconnect

Note the typo: the second setif registered "eht1" instead of "eth1", and oifcfg stored it exactly as typed, as the getif output shows. Finally, run vipca (as root, from a graphical session) to configure the VIPs and check the result.
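A sketch of correcting that typo and sanity-checking the rebuilt stack (run as root; the node names are this test cluster's):

# Drop the misspelled global interface and register the correct one
oifcfg delif -global eht1
oifcfg setif -global eth1/192.168.11.0:cluster_interconnect
oifcfg getif

# Verify the rebuilt registry and voting disks
ocrcheck
crsctl query css votedisk

# Check the nodeapps (VIP/ONS/GSD) created by vipca, and the resource list
srvctl status nodeapps -n rac1
srvctl status nodeapps -n rac2
crs_stat -t

Because the OCR was recreated from scratch rather than imported, the ASM, database, instance, and listener resources are no longer registered in it; they have to be added back, for example with srvctl add database -d orcl -o $ORACLE_HOME and srvctl add instance -d orcl -i orcl1 -n rac1 (the orcl names are hypothetical, adjust to your own), plus netca for the listener resources.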
