A BCV (Business Continuance Volume) is the target volume of the EMC Symmetrix TimeFinder/Mirror process. When a BCV is fully synchronized with a data device, the BCV is separated, or split, and becomes available to a host for backup or other host processes. In short, a split BCV is a point-in-time clone of another device. Before you use a BCV split as an Oracle backup, you need to be clear about what is contained on the backed-up volumes. You can use a BCV split to back up only the ASM disks holding the RDBMS datafiles, redo logs, and controlfiles; a BCV split of the GRID_HOME will not work.
Whenever you use a BCV split, the cloning process consists of the following stages:
- Install Grid Infrastructure software on the destination.
- Mount the BCV split volumes and check access to the disks.
- Bring up the ASM instance and mount the disk groups.
- Bring up the RDBMS instance and recover the database.
1. INSTALL GRID INFRASTRUCTURE SOFTWARE ON THE DESTINATION
The Oracle version must match on the source and destination servers. Even if the source is a RAC setup, you can install Grid Infrastructure for a standalone server on the destination.
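One quick way to compare the installed version on each server (a sketch; the exact output format varies by release) is to query the software from the respective home:
$ $ORACLE_HOME/bin/sqlplus -V
$ $ORACLE_HOME/OPatch/opatch lsinventory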
2. MOUNT BCV SPLIT VOLUMES AND CHECK ACCESS TO THE DISKS
1. If you encounter problems mounting the BCV split volumes, contact EMC:
http://www.emc.com
http://www.emc.com/collateral/software/white-papers/h5550-backup-oracle-symmetrix-netwrk-powernsap-wp.pdf
2. After the BCV split is mounted on the destination, make sure the software owner (oracle or grid, depending on the installation) has full access to the disks. Use the kfod utility ($ORACLE_HOME/bin/kfod) to simulate ASM disk discovery from the operating system level. The execution syntax is:
$ kfod asm_diskstring='/dev/rdsk/*' disks=all
or
$ kfod asm_diskstring='/dev/oracleasm/disks/*' disks=all
or
$ kfod asm_diskstring='ORCL:*' disks=all
Example of syntax:
$ cd $ORACLE_HOME/bin
$ kfod disks=all
$ kfod status=TRUE dscvgroup=TRUE asm_diskstring='/dev/xvd*' disks=all
$ kfod status=TRUE dscvgroup=TRUE asm_diskstring='/dev/rhdisk*' disks=all
$ kfod status=TRUE dscvgroup=TRUE asm_diskstring='/dev/emcpower*' disks=all
where asm_diskstring is your valid path to the disks.
3. Check disk permissions; source and destination should match. The owner:group of the cloned disks must be the ASM OS user and its group.
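For example, assuming the cloned disks are /dev/rhdisk2 through /dev/rhdisk5 and the ASM software owner is grid:asmadmin (names and paths are illustrative and will differ in your environment), check the ownership and, as root, correct it:
$ ls -l /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5
# chown grid:asmadmin /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5
# chmod 660 /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5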
4. If the kfod output reports the following error, the candidate raw devices are not accessible at the OS level by the grid or oracle OS user, so ASM and the ASM tools will not be able to detect or use them until this is fixed:
ORA-15025: could not open disk "/dev/rdsk/c3t60080E500017CAA8000006954EEA57CDd0s0" SVR4 Error: 13: Permission denied
In that case, connect as the oracle or grid OS user and verify that you can read from and write to the raw devices using the dd OS command, as follows:
$ id
$ dd if=/dev/null of=/dev/rdsk/c3t60080E500017CAA8000006954EEA57CDd0s0 bs=8192 count=12800
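A complementary read check can be run the same way (a sketch, reusing the example device path above; it reads about 100 MB and writes nothing to the device):
$ dd if=/dev/rdsk/c3t60080E500017CAA8000006954EEA57CDd0s0 of=/dev/null bs=8192 count=12800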
If dd reports the same problem, engage your OS and storage support to correct access to the disks/raw devices for the grid and oracle OS users. Do not proceed to the next step until kfod shows correct access to the disks. The output will be similar to the following (user and group should match on the source and destination):
$ kfod asm_diskstring='/dev/rhdisk*' disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:     102400 Mb /dev/rhdisk2                             grid     asmadmin
   2:     102400 Mb /dev/rhdisk3                             grid     asmadmin
   3:     102400 Mb /dev/rhdisk4                             grid     asmadmin
   4:     102400 Mb /dev/rhdisk5                             grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM /u01/app/grid/product/11.2.0/grid
or
$ kfod asm_diskstring='/dev/rhdisk*' disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:     102400 Mb /dev/rhdisk2                             oracle   dba
   2:     102400 Mb /dev/rhdisk3                             oracle   dba
   3:     102400 Mb /dev/rhdisk4                             oracle   dba
   4:     102400 Mb /dev/rhdisk5                             oracle   dba
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM /u01/app/grid/product/11.1.0/grid
3. BRING UP ASM INSTANCE AND MOUNT DISKGROUPS
Below are examples for 11gR2 standalone (non-RAC). Note that, unlike in 10g or 11gR1, CRS and ASM share the same home in 11gR2, so the clusterware processes have to be up first.
1. Set environment. For example:
$ export ORACLE_HOME=/oracle/ora11g/11.2.0/grid
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=+ASM
or
$ . oraenv
ORACLE_SID = +ASM
2. As root, check whether the clusterware resources are up:
# cd [Grid Home]/bin
# ./crsctl status resource -t
If HAS or the ora.cssd resource is offline, start them manually:
# ./crsctl start has
# ./crsctl start resource ora.cssd
# ./crsctl status resource -t
3. Start ASM and mount the ORA_CRS disk group, where ORA_CRS is the disk group name. One option is to start with a pfile instead of an spfile. If CRS is up, you may be able to start up using the spfile, which is stored inside ASM.
SQL> startup nomount ;
SQL> alter diskgroup ORA_CRS mount ;
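If no usable spfile is available yet, the same startup can be done with an explicit pfile (a sketch; /tmp/asm_pfile.ora is a hypothetical parameter file such as the one prepared in step 4 below):
SQL> startup nomount pfile='/tmp/asm_pfile.ora' ;
SQL> alter diskgroup ORA_CRS mount ;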
As root, start crsd:
# crsctl start resource ora.crsd -init
Once they are started, manually mount all disk groups that are currently visible on this server:
SQL> alter diskgroup [DG] mount ;
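Alternatively, every disk group that ASM can discover may be mounted in a single statement (a sketch), and the result verified immediately:
SQL> alter diskgroup all mount ;
SQL> select name, state from v$asm_diskgroup ;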
4. If you get spfile errors, you need to create or recreate the SPFILE for ASM. By default ASM can start without a pfile or spfile, so this is a very common problem. Take extra care with the asm_diskstring parameter, as it affects the discovery of the voting disks. Verify the previous settings using the ASM alert log on the source where you took the BCV split.
Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters; these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup lists all non-default parameter values. Note that the last startup of ASM (in step 2, via the CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group needs to be located.
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL> create spfile='+CRS' from pfile='/YOUR_LOCATION/asm_pfile.ora';
File created.
SQL> startup nomount ;
SQL> alter diskgroup ORA_CRS mount ;
As root, start crsd:
# crsctl start resource ora.crsd -init
5. Check state of disks:
SQL> set linesize 200
SQL> column path format a45
SQL> column name format a20
SQL> column failgroup format a15
SQL> select group_number, disk_number, mount_status, header_status, state, failgroup, name, path from v$asm_disk;
If the disks show as MEMBER, you should be able to mount the disk groups.
6. Once all the correct disks show as MEMBER, you can mount the disk groups that are currently visible on this server:
SQL> alter diskgroup [DG] mount ;
SQL> select group_number, name, state, type, total_mb, free_mb from v$asm_diskgroup;
7. Check that ASM is up and that the correct values are recognized for asm_diskgroups and asm_diskstring.
SQL> select * from v$instance;
SQL> show parameter asm;
4. BRING UP RDBMS INSTANCE AND RECOVER THE DATABASE
1. With ASM up and the disk groups mounted, you are ready to open the database. Start the listener from the Grid home:
$ lsnrctl start
$ lsnrctl status
2. Set environment to point to your database home (which should already be installed or cloned):
$ export ORACLE_HOME=/oracle/ora11g/11.2.0/db_home
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=YOUR_DB_SID
or
$ . oraenv
ORACLE_SID = YOUR_DB_NAME
3. Start the database:
$ sqlplus / as sysdba
SQL> startup
4. Check that the previous command actually opened the database:
SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
YOUR_DB_NAME READ WRITE
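If the instance does not open cleanly because the split was taken while the database was active, media recovery may be needed before the open. A minimal sketch, assuming the archived logs generated on the source around the time of the split are accessible to this instance:
SQL> startup mount ;
SQL> recover database ;
SQL> alter database open ;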