The Geek Diary

How to replace failed root disk under Solaris Volume Manager (SVM)

by admin

In the last post, we discussed replacing a failed disk under ZFS. Now let us look at an example of replacing a failed root disk under Solaris Volume Manager (SVM). Note that the failed root disk here is part of a mirrored configuration.

Replacing the failed root disk

Consider the mirrored root disk configuration shown below, containing the root and swap mirrors and their submirrors. As shown, the disk c1t0d0 has failed, causing the root and swap submirrors (d1 and d11) to go into maintenance state.

# metastat -c
d10              m  6.0GB d11 (maint) d12         --- swap
    d11          s  6.0GB c1t0d0s1 (maint)
    d12          s  6.0GB c1t1d0s1
d0               m  74GB d1 (maint) d2            --- root
    d1           s  74GB c1t0d0s0 (maint)
    d2           s  74GB c1t1d0s0
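
On a busy system with many metadevices, it helps to pick out just the failed components. A minimal sketch that filters the listing for the maintenance flag, using the sample `metastat -c` output above saved to a file (on a live system you would pipe `metastat -c` straight into grep):

```shell
# Save the sample `metastat -c` listing shown above to a file
cat <<'EOF' > /tmp/metastat.out
d10              m  6.0GB d11 (maint) d12         --- swap
    d11          s  6.0GB c1t0d0s1 (maint)
    d12          s  6.0GB c1t1d0s1
d0               m  74GB d1 (maint) d2            --- root
    d1           s  74GB c1t0d0s0 (maint)
    d2           s  74GB c1t1d0s0
EOF

# List only the components flagged as being in maintenance state
grep '(maint)' /tmp/metastat.out
```

This prints the two mirrors and the two submirrors on the failed disk, and nothing for the healthy side.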

1. The first step is to remove the submirrors that are in maintenance state. Use the metadetach command to detach the failed submirrors:

Syntax :

# metadetach [mirror] [sub-mirror]

Detach the submirrors d11 and d1:

# metadetach d10 d11
# metadetach d0 d1

2. Next, remove any state database replicas (metadbs) on the failed disk. Check the metadbs configured on the system:

# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c1t0d0s7
     a        u         8208            8192            /dev/dsk/c1t0d0s7
     a        u         16400           8192            /dev/dsk/c1t0d0s7
     a        u         16              8192            /dev/dsk/c1t1d0s7
     a        u         8208            8192            /dev/dsk/c1t1d0s7
     a        u         16400           8192            /dev/dsk/c1t1d0s7

As shown above, remove the three metadbs on the c1t0d0s7 slice:

# metadb -d c1t0d0s7
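
After the delete, `metadb` should list replicas only on c1t1d0s7. As a quick sanity check, you can tally replicas per slice from a saved metadb listing; a sketch using the sample output shown above:

```shell
# Save the sample `metadb` listing shown above to a file
cat <<'EOF' > /tmp/metadb.out
     a        u         16              8192            /dev/dsk/c1t0d0s7
     a        u         8208            8192            /dev/dsk/c1t0d0s7
     a        u         16400           8192            /dev/dsk/c1t0d0s7
     a        u         16              8192            /dev/dsk/c1t1d0s7
     a        u         8208            8192            /dev/dsk/c1t1d0s7
     a        u         16400           8192            /dev/dsk/c1t1d0s7
EOF

# Count replicas per slice (the last field is the device path)
awk '{count[$NF]++} END {for (s in count) print s, count[s]}' /tmp/metadb.out
```

Before the delete this shows three replicas on each slice; afterwards c1t0d0s7 should no longer appear in the output at all.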

3. We need to remove the disk from the OS before physically pulling it from the server. Refer to the post below to remove the disk using cfgadm or luxadm:

How to remove a failed disk using luxadm and cfgadm

4. Now insert the new disk and run devfsadm to scan for it:

# devfsadm

5. Once the disk is detected, copy the VTOC from the surviving disk c1t1d0 to the newly added disk c1t0d0 using the fmthard command:

# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
   fmthard:  New volume table of contents now in place.
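
To confirm that the slice tables on the two disks now match, you can compare the prtvtoc output of both disks. A sketch using hypothetical temp file names; the comment lines are stripped because they embed the device path and would always differ:

```shell
# Dump both slice tables, dropping prtvtoc's '*' comment header
# (the header contains the device name, so it never matches)
prtvtoc /dev/rdsk/c1t1d0s2 2>/dev/null | grep -v '^\*' > /tmp/vtoc.good
prtvtoc /dev/rdsk/c1t0d0s2 2>/dev/null | grep -v '^\*' > /tmp/vtoc.new

# No diff output means the slice layouts are identical
diff /tmp/vtoc.good /tmp/vtoc.new && echo "slice tables match"
```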

6. Install the bootblk on slice 0 of the new disk:

# installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

7. Add back the metadbs we deleted earlier:

# metadb -afc 3 c1t0d0s7

8. Update the device ID in the SVM database.

# metadevadm -u c1t0d0
   Updating Solaris Volume Manager device relocation information for c1t0d0
   Old device reloc information:
       id1,sd@n5000cca000368d24
   New device reloc information:
       id1,sd@n5000c500052fb987

9. Reattach the detached submirrors using the metattach command. The syntax is:

# metattach [mirror] [submirror]

This starts a resync of the submirrors:

# metattach d10 d11
# metattach d0 d1

10. If you did not detach the submirrors in step 1, run the metareplace command instead to start the sync, which returns the submirrors to the 'Okay' state:

# metareplace -e d0  c1t0d0s0
   d0: device c1t0d0s0 is enabled
# metareplace -e d10  c1t0d0s1
   d10: device c1t0d0s1 is enabled

11. To check the resync status, use the metastat -c command:

# metastat -c

d10              m  6.0GB d11 (resync-53%) d12
    d11          s  6.0GB c1t0d0s1
    d12          s  6.0GB c1t1d0s1
d0               m  74GB d1 (resync-27%) d2
    d1           s  74GB c1t0d0s0
    d2           s  74GB c1t1d0s0

To check the sync status continuously, every 10 seconds, use:

# while true; do metastat | grep done; sleep 10; done
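
A variation that blocks until the resync has finished instead of printing forever; a sketch assuming metastat reports "Resync in progress" for a mirror that is still syncing:

```shell
# Poll every 10 seconds until no mirror reports an active resync
while metastat 2>/dev/null | grep -q 'Resync in progress'; do
    sleep 10
done
echo "resync complete"
```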

For x86/x64 hardware

On x86/x64 hardware, a few extra steps are needed for the disk replacement. First, create a new fdisk partition on the replacement disk before copying the VTOC from the surviving disk c1t1d0:

# fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c1t0d0p0
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
y

Second, install GRUB instead of the bootblk on the new disk:

# /sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

Filed Under: SVM

© 2023 · The Geek Diary
