The Ask
While installing the 12c Grid Infrastructure (GI) with the intention of having three copies of the voting disks and OCR files, we created the +GRID disk group with three underlying LUNs. However, because we chose “external redundancy” for the disk group, we ended up with just one copy of the voting disk and OCR files, with no option of multiplexing them within the same group.
The Solution
Before we get into the solution to this ask, let’s first see what exactly “external”, “normal” and “high” redundancy mean.
Disk group redundancy
If you specify redundancy during the creation of a disk group, then Oracle ASM stores a copy of each extent in a different failure group within the same disk group. Failure groups are created for NORMAL and HIGH redundancy disk groups. A disk group with EXTERNAL redundancy relies on the underlying storage to manage mirroring (using RAID or any other means).
There are three types of disk groups based on the Oracle ASM redundancy level. The redundancy levels are:
- External redundancy: In the case of external redundancy, Oracle ASM does not take care of mirroring your data; it is assumed that the underlying storage has the necessary capability to mirror the blocks (for example, using a RAID configuration or by any other means). Note, however, that with external redundancy any write error causes a forced dismount of the disk group.
- Normal redundancy: In the case of NORMAL redundancy, Oracle ASM mirrors the extents in a file, so we have two copies of every extent. The loss of one ASM disk does not cause data loss because a mirror copy is still available. The space requirement for NORMAL redundancy is double that of external redundancy, so we have to make sure we have twice the space required by the database.
- High redundancy: In the case of HIGH redundancy, Oracle ASM makes two mirror copies of the original data, so we have three copies in total. Because of this, the loss of disks in two different failure groups is tolerated. If there are not enough online failure groups to satisfy the disk group mirroring, Oracle ASM allocates as many mirrors as possible and allocates the remaining required mirrors once a sufficient number of failure groups becomes available. This redundancy level needs three times the space of the data.
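For illustration, here is a minimal sketch of creating a NORMAL redundancy disk group with three failure groups; the disk group name, disk paths and compatibility attribute are assumptions for this example, not values taken from the environment above.
SQL> CREATE DISKGROUP VOTE NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/oracleasm/disks/VOTE1'
       FAILGROUP fg2 DISK '/dev/oracleasm/disks/VOTE2'
       FAILGROUP fg3 DISK '/dev/oracleasm/disks/VOTE3'
       ATTRIBUTE 'compatible.asm' = '12.1';
With NORMAL redundancy and three failure groups, the clusterware can place one voting file in each failure group, which is exactly the three voting disk copies the ask calls for.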
Moving the ASM spfile from External to Normal Redundancy
Moving the ASM spfile cannot be done online, as the other instances will be using it, so there will be some downtime involved in this solution. The steps below walk through the migration.
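Before starting, you may want to confirm the redundancy type of each disk group from ASMCMD; the Type column of lsdg shows EXTERN, NORMAL or HIGH. The output below is a trimmed, illustrative sketch, not a capture from this environment.
ASMCMD> lsdg
State    Type    Rebal  ...  Voting_files  Name
MOUNTED  EXTERN  N      ...  N             DATA/
MOUNTED  NORMAL  N      ...  Y             VOTE/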
1. Stop the CRS on all the nodes.
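For reference, CRS is stopped as root on each node; the Grid home path below matches the one used later in this article.
# /u01/app/12.1.0/grid/bin/crsctl stop crs
Note that ASMCMD itself needs a running ASM instance to work against, so the node where you run the following steps must still have ASM available when you execute them.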
2. Take a backup of the existing spfile.
ASMCMD> spcopy +DATA/cehaovmsp1clu05/ASMPARAMETERFILE/registry.253.879011727 /home/oracle
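If you are not sure of the current spfile path to pass to spcopy, spget with no arguments prints the location registered in the GPnP profile:
ASMCMD> spget
+DATA/cehaovmsp1clu05/ASMPARAMETERFILE/registry.253.879011727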
3. Then move the spfile to the GRID_HOME/dbs directory
ASMCMD> spmove +DATA/cehaovmsp1clu05/ASMPARAMETERFILE/registry.253.879011727 /u01/app/12.1.0/grid/dbs
At this point you will get the error below:
ORA-15032: not all alterations performed
ORA-15028: ASM file '+DATA/cehaovmsp1clu05/ASMPARAMETERFILE/registry.253.879011727' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
This simply means that the old file could not be dropped because the running ASM instance is still accessing it; the move itself has taken effect. You can ignore the error, exit the ASMCMD prompt, and then log back in to ASMCMD to get the details of the spfile:
ASMCMD> spget /u01/app/12.1.0/grid/dbs/registry.253.879011727
4. Create a directory structure in the new disk group for the ASM parameter file
ASMCMD> cd vote
ASMCMD> mkdir cehaovmsp1clu05
ASMCMD> cd cehaovmsp1clu05
ASMCMD> mkdir ASMPARAMETERFILE
ASMCMD> cd ASMPARAMETERFILE
ASMCMD> pwd
+vote/cehaovmsp1clu05/ASMPARAMETERFILE
5. Now move the spfile from the filesystem location to the new disk group
ASMCMD> spmove /u01/app/12.1.0/grid/dbs/registry.253.879011727 +vote/cehaovmsp1clu05/ASMPARAMETERFILE/spfileMoveASM.ora
6. Check the new spfile location
ASMCMD> spget +vote/cehaovmsp1clu05/ASMPARAMETERFILE/spfileMoveASM.ora
7. Verify that the ASM parameter file location has changed in the GPnP profile:
$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line: /u01/app/12.1.0/grid/bin/gpnptool.bin get -o-
[GPnP profile XML output trimmed]
Success.
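The full profile is verbose XML; to pull out just the spfile entry you can filter the output. The grep pattern here assumes the SPFile attribute name used in the 12c profile, and the second line is the output you would expect to see after the move:
$ gpnptool get -o- 2>/dev/null | grep -o 'SPFile="[^"]*"'
SPFile="+vote/cehaovmsp1clu05/ASMPARAMETERFILE/spfileMoveASM.ora"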
8. Now start CRS on all the other nodes; the GPnP profile change will be propagated to each node as it starts up.
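As with the stop, this is done as root on each node, for example:
# /u01/app/12.1.0/grid/bin/crsctl start crs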
9. You can use the document below to move the MGMT database from the existing disk group to the new disk group.