VCS sample configuration – Oracle Database

This is one of the simplest configurations of an Oracle database under VCS. In practical scenarios you may have more mount points and multipathing configured, but once you understand the basics of configuring each resource and how to link the resources together, it is easy to build any complex setup under VCS.

Prerequisites:

1. Oracle should be installed on shared storage at the mount point /oracle/product/10.2.0
2. All data and other database files will be stored under /u01/oradata
3. 192.168.1.4 will be used as the virtual IP to connect to the database. On failover this IP moves from one node to the other (a few quick checks are sketched below)
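
Before starting the VCS configuration, it is worth confirming these prerequisites on both nodes. A minimal sketch, assuming Solaris with VxVM/VxFS (adjust device and path names to your environment):

# vxdisk -o alldgs list ###(disk group oradg should be visible from both nodes)
# df -h /oracle/product/10.2.0 ###(Oracle Home mounted on the node that currently owns the storage)
# ping 192.168.1.4 ###(the virtual IP should not respond yet; VCS will plumb it)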

Open the cluster configuration in read-write mode

# haconf -makerw

Add and configure the Oracle service group ORASG

# hagrp -add ORASG
# hagrp -modify ORASG SystemList node01 0 node02 1
# hagrp -modify ORASG AutoStartList node01 node02

sample main.cf configuration for ORASG service group

group ORASG (
SystemList = { node01 = 0, node02 = 1 }
AutoStartList = { node01, node02 }
)
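
Once the group is added, you can verify its attributes with standard VCS display commands, for example:

# hagrp -display ORASG -attribute SystemList
# hagrp -display ORASG -attribute AutoStartList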


Adding the resources to the service group and configuring them

1. Oracle Database resource

# hares -add ORA_RES Oracle ORASG
# hares -modify ORA_RES Critical 0
# hares -modify ORA_RES Sid ORA01 ###(ORA01 -> Oracle DB instance Name)
# hares -modify ORA_RES Owner oracle ###(oracle -> Owner for Oracle)
# hares -modify ORA_RES Home /oracle/product/10.2.0 ####(/oracle/product/10.2.0 -> Oracle Home Directory)
# hares -modify ORA_RES Pfile /oracle/product/10.2.0/dbs/initORA01.ora ### optional parameter
# hares -modify ORA_RES Enabled 1
# hares -online ORA_RES -sys node01

Sample main.cf configuration for oracle resource

Oracle ORA_RES (
Sid = ORA01
Owner = oracle
Home = "/oracle/product/10.2.0"
Pfile = "/oracle/product/10.2.0/dbs/initORA01.ora"
)
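
The online above will normally succeed only if the Oracle Home and data filesystems are already available on node01 (the storage resources are configured further below). To check the resource and instance state, a sketch assuming the SID ORA01 used above:

# hares -state ORA_RES
# ps -ef | grep ora_pmon_ORA01 ###(the pmon process confirms the instance is running)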

2. Listener Resource

# hares -add LISTR_RES Netlsnr ORASG
# hares -modify LISTR_RES Critical 0
# hares -modify LISTR_RES Owner oracle ###(oracle -> Owner for Oracle)
# hares -modify LISTR_RES Home /oracle/product/10.2.0 ###(/oracle/product/10.2.0 -> Oracle Home Directory)
# hares -modify LISTR_RES TnsAdmin /oracle/product/10.2.0/network/admin
# hares -modify LISTR_RES Listener ORA01 ### optional parameter
# hares -modify LISTR_RES Enabled 1
# hares -online LISTR_RES -sys node01

sample main.cf configuration for the listener resource

Netlsnr LISTR_RES (
Owner = oracle
Home = "/oracle/product/10.2.0"
TnsAdmin = "/oracle/product/10.2.0/network/admin"
Listener = ORA01
LsnrPwd = S2cEjcD5s3Cbc
)
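
To confirm the listener, check the resource state and query the listener as the oracle user. A sketch, assuming the listener name ORA01 set above:

# hares -state LISTR_RES
# su - oracle -c "lsnrctl status ORA01"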

3. IP Resource

# hares -add IP_RES IP ORASG
# hares -modify IP_RES Address 192.168.1.4 ###(192.168.1.4 -> Virtual IP)
# hares -modify IP_RES Device e1000g0 ###(e1000g0 -> NIC Device)

sample main.cf configuration for IP resource

IP IP_RES (
Device = e1000g0
Address = "192.168.1.4"
)
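
Once IP_RES is online, the virtual IP is plumbed as a logical interface on the NIC. A quick check, assuming Solaris and the e1000g0 device used above:

# ifconfig -a ###(look for a logical interface such as e1000g0:1 carrying 192.168.1.4)
# ping 192.168.1.4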

4. NIC Resource

# hares -add NIC_RES NIC ORASG
# hares -modify NIC_RES Device e1000g0

sample main.cf configuration for NIC resource

NIC NIC_RES (
Device = e1000g0
)
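
NIC is a persistent resource, so it is never onlined or offlined; VCS only monitors it. You can trigger an immediate monitor cycle and check its state, for example:

# hares -probe NIC_RES -sys node01
# hares -state NIC_RES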

5. Volume Resource

# hares -add ORAVOL_RES Volume ORASG
# hares -modify ORAVOL_RES Volume oravol ###(oravol -> volume for the Oracle data filesystem)
# hares -modify ORAVOL_RES DiskGroup oradg ###(oradg -> Diskgroup for volume)

sample main.cf configuration for volume resource

Volume ORAVOL_RES (
Volume = oravol
DiskGroup = oradg
)
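
Outside of VCS, the volume can be verified with VxVM directly. A sketch, assuming the oradg/oravol names used above:

# vxprint -g oradg oravol ###(shows ENABLED/ACTIVE once the volume is started)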

6. Diskgroup Resource

# hares -add ORADG_RES DiskGroup ORASG
# hares -modify ORADG_RES DiskGroup oradg
# hares -modify ORADG_RES DiskGroupType private

sample main.cf configuration for diskgroup resource

DiskGroup ORADG_RES (
DiskGroup = oradg
)
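
To see where the disk group is currently imported, the VxVM view is handy, for example:

# vxdg list ###(oradg appears here only on the node where it is imported)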

7. Mount Resource

# hares -add ORAMOUNT_RES Mount ORASG
# hares -modify ORAMOUNT_RES BlockDevice /dev/vx/dsk/oradg/oravol
# hares -modify ORAMOUNT_RES FSType vxfs
# hares -modify ORAMOUNT_RES FsckOpt "%-y" ###(the leading % keeps -y from being parsed as a hares option)
# hares -modify ORAMOUNT_RES MountPoint /u01/oradata

sample main.cf configuration for mount resource

Mount ORAMOUNT_RES (
MountPoint = "/u01/oradata"
BlockDevice = "/dev/vx/dsk/oradg/oravol"
FSType = vxfs
FsckOpt = "-y"
)
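
After the mount resource comes online, the filesystem can be confirmed from the OS, for example:

# df -k /u01/oradata ###(should show /dev/vx/dsk/oradg/oravol mounted on /u01/oradata)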


Link the resources (hares -link <parent> <child>; the parent resource depends on the child)

# hares -link LISTR_RES ORA_RES
# hares -link ORA_RES IP_RES
# hares -link ORA_RES ORAMOUNT_RES
# hares -link ORAVOL_RES ORADG_RES
# hares -link ORAMOUNT_RES ORAVOL_RES
# hares -link IP_RES NIC_RES
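
Only ORA_RES and LISTR_RES were explicitly enabled above. Before onlining the group, enable the remaining resources and review the dependency tree; a sketch using standard VCS commands:

# hagrp -enableresources ORASG ###(sets Enabled = 1 on every resource in the group)
# hagrp -dep ORASG ###(lists the parent/child links configured above)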

Online the service group on node01:

# hagrp -online ORASG -sys node01

Now run # hastatus -sum and verify the status of the service group.

Failover the SG to node02:

# hagrp -switch ORASG -to node02

Check # hastatus -sum and confirm the status of the service group.
Once you have verified that the failover works correctly, set the Critical attribute of all the resources to 1. This ensures the service group fails over if any resource in it faults; a quick way to do this is sketched below.
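
For example, assuming the resource names used above:

# for res in ORA_RES LISTR_RES IP_RES NIC_RES ORAVOL_RES ORADG_RES ORAMOUNT_RES; do hares -modify $res Critical 1; done
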
Save & Close the Cluster Configuration

# haconf -dump -makero