How to use mdadm to create a software mirror on top of multipath devices

by admin

The Problem

The mdadm tool was used to create a software RAID mirror using two device-mapper-multipath devices:

# /sbin/mdadm /dev/md0 --create --verbose --level=1 --raid-devices=2 /dev/mapper/ocrp1 /dev/mapper/ocrmirrorp1

The setup was then confirmed:

# /sbin/mdadm --detail /dev/md0
...
  Number   Major   Minor   RaidDevice State
    0     253        2        0      active sync   /dev/dm-2
    1     253        3        1      active sync   /dev/dm-3

Since mdadm shows the underlying device names (/dev/dm-N), the mappings of the friendlier names (e.g. /dev/mapper/ocrp1) are also verified:

# /bin/ls -l /dev/mpath/
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocrp1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocrmirrorp1 -> ../dm-3
# /bin/ls -l /dev/mapper/
brw-rw---- 1 root disk 253,   2 Apr 23 11:15 ocrp1
brw-rw---- 1 root disk 253,   3 Apr 23 11:15 ocrmirrorp1

These convenient names map to the same device-mapper devices (major/minor 253:2 and 253:3), so the setup is confirmed to be correct. After a reboot, however, mdadm shows the following:

# /sbin/mdadm --detail /dev/md0
...
Number   Major   Minor   RaidDevice State
  0       8       97        0      active sync   /dev/sdg1
  1       8      113        1      active sync   /dev/sdh1

The RAID is active, but it has been assembled from the underlying SCSI path devices (/dev/sdg1 and /dev/sdh1) rather than from the multipath devices as expected.
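
To see exactly what the array was assembled from and which SCSI paths sit behind each multipath map, the two views can be cross-checked. The commands below are a hedged sketch: the map names (ocr, ocrmirror) and path names (sdg, sdh) follow this example and will differ on other systems.

# /bin/cat /proc/mdstat
# /sbin/multipath -ll

The first command lists the member devices of /dev/md0; the second lists each multipath map together with the sd paths behind it, which should include the devices mdadm picked up after the reboot.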

The Solution

This is actually a timing problem. During system boot, the /etc/rcN.d scripts start mdadm before the multipath devices have been detected and made ready. It is essentially a race condition: the more multipath devices there are, the longer they take to be recognized, and mdadm may run before multipath processing is complete.
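
It is also worth confirming that the multipath daemon itself is enabled at boot; if it is not, the mappings will never be ready in time regardless of script ordering. The commands below apply to CentOS/RHEL 5 and 6 (SysV init):

# /sbin/chkconfig --list multipathd
# /sbin/chkconfig multipathd on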

The solution is to add the requisite file system handlers and device drivers to the initrd file so that they are available to the kernel at boot time. This allows the kernel to start processing the multipath devices earlier.

Note: The Linux kernel image (/vmlinuz) is built with support for only a few very fundamental devices. File system handlers and common device drivers are compiled separately and packaged into the initrd (initial ramdisk) file. The GRUB or LILO bootloader first places the contents of the initrd file into memory and then loads the kernel. The kernel uses the initial ramdisk to obtain the device drivers needed to access the real root file system, then switches to that root file system and frees the memory used by the initrd image.

This approach allows a single kernel image to be shipped with a device driver set tailored to each system, without wasting memory on device driver and file system handler code that is never used.

To build a custom initrd file that includes multipath support, use the following procedure:

1. Create a new initrd file that includes the multipath, device-mapper multipath, and HBA (here, lpfc) driver modules:

# /sbin/mkinitrd -v /root/initrd-mp.img 2.6.18-prep --with=multipath --with=dm-multipath --with=lpfc --omit-raid-modules

To do the same on CentOS/RHEL 6 and 7, see the following post:

How to Rebuild the “initramfs” with Multipath in CentOS/RHEL 6 and 7
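
Before installing the new image, it can be worth listing its contents to confirm that the multipath pieces were actually pulled in. On CentOS/RHEL 5 the initrd is a gzip-compressed cpio archive, so a listing along the following lines should show the dm-multipath and lpfc kernel modules that were requested above:

# /bin/zcat /root/initrd-mp.img | /bin/cpio -it | /bin/grep -Ei 'multipath|lpfc'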

2. Verify the file /etc/mdadm.conf is configured for the RAID device:

# /bin/cat /etc/mdadm.conf
DEVICE /dev/mapper/*
ARRAY /dev/md0 uuid=ccfe8a98:ea584ff2:2fad9d51:305ea2da
devices=/dev/mapper/ocrp1,/dev/mapper/ocrmirrorp1 level=raid1
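
If /etc/mdadm.conf does not exist yet, the ARRAY line (including the array UUID) can be generated from the running array and then edited by hand to add the devices= entry that points at the /dev/mapper names. A minimal sketch, reusing the DEVICE restriction from this example:

# /bin/echo "DEVICE /dev/mapper/*" > /etc/mdadm.conf
# /sbin/mdadm --detail --scan >> /etc/mdadm.conf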

3. Copy the new ramdisk image into the boot location expected by the bootloader:

# /bin/cp /root/initrd-mp.img /boot/

4. Add a new entry to the bootloader configuration file /boot/grub/grub.conf to use the new ramdisk image:

title MDADM-MP
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-mp.img

Note: Change the example GRUB stanza as appropriate. Your kernel version and root device will certainly be different from this example.

When using CentOS/RHEL 7 and GRUB2, see the following post for the exact steps:

CentOS / RHEL 7 : How to Modify GRUB2 Arguments with grubby
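
On the GRUB legacy systems shown above, the new stanza can also be made the default so that no manual selection is needed at the console. Entries in /boot/grub/grub.conf are counted from 0, so the index below is only illustrative and must match the position of the MDADM-MP stanza:

default=1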

5. Reboot your system and select the MDADM-MP entry from the GRUB menu.

Hint: If you still do not see the multipath devices being used, add a start-up step that runs mdadm as the last part of the boot sequence. One way to do this is to add the necessary command to /etc/rc.local (see the sketch below) if you do not want to write a full /etc/init.d service script.
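
A minimal sketch of the /etc/rc.local approach, assuming the array from this example and assuming /dev/md0 is not already mounted or otherwise in use at that point in the boot sequence, is to stop the incorrectly assembled array and reassemble it from the multipath names:

/sbin/mdadm --stop /dev/md0
/sbin/mdadm --assemble /dev/md0 /dev/mapper/ocrp1 /dev/mapper/ocrmirrorp1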
