This is required when there is no RAID controller card available in the server and the only option is software RAID. This post makes changes to partitions, so any existing data on the disks involved should be backed up before creating the RAID array, as per standard best practices.
1. In order to mirror the disk, information about the existing partitions has to be obtained. This can be done with one of the following commands:
# parted /dev/sda u s p
# fdisk -l /dev/sda
2. The partition table has to be cloned by using the following command:
# sgdisk -R /dev/sdb /dev/sda
3. After cloning the partition, the new drive needs a GUID:
# sgdisk -G /dev/sdb
4. All partitions which will be mirrored need to have the RAID flag:
# parted /dev/sda set 1 raid on
# parted /dev/sda set 2 raid on
# parted /dev/sda set 3 raid on
# parted /dev/sdb set 1 raid on
# parted /dev/sdb set 2 raid on
# parted /dev/sdb set 3 raid on
5. Create a new RAID device using the matching partition on the new disk (so if /boot/efi is mounted on /dev/sda1, the RAID member on the new disk needs to be /dev/sdb1):
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1 --metadata=1.0
6. Create the same filesystem as the one used by the /boot/efi partition; usually this is vfat:
# mkfs.vfat /dev/md1
7. The new RAID array has to be mounted and the files from the current /boot/efi partition have to be copied:
# mkdir /mnt/md1
# mount /dev/md1 /mnt/md1
# rsync -a /boot/efi/ /mnt/md1/
# sync
# umount /mnt/md1
# rmdir /mnt/md1
8. The current /boot/efi partition has to be unmounted and the new one must take its place:
# umount /boot/efi
# mount /dev/md1 /boot/efi
9. In order to complete the mirroring process, the old partition has to be added to the new array:
# mdadm /dev/md1 -a /dev/sda1
10. The RAID status can be monitored with the following command:
# mdadm -D /dev/md1
11. In order to boot from the RAID disk, the /etc/fstab file has to be edited, but for that the UUID of the new device is required:
# blkid | grep md1
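If the UUID needs to be captured in a script rather than copied by hand, it can be cut out of the blkid line. A minimal sketch, using a sample line in place of the real blkid output (the UUID value is illustrative):

```shell
# Sample line standing in for `blkid | grep md1`; the UUID is illustrative.
line='/dev/md1: UUID="A1B2-C3D4" TYPE="vfat"'
# Cut the value out of UUID="..."
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```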
12. The UUID obtained in the previous step has to replace the old one in /etc/fstab. The file can be edited with vi; it is better to comment out the current line and add it again just below, with the UUID changed:
# cat /etc/fstab
#UUID=6d36b3b0-0238-4c86-9368-f60b571fbab9 /boot/efi vfat defaults 0 0
UUID=<new UUID> /boot/efi vfat defaults 0 0
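The comment-and-append edit can also be done non-interactively. A sketch of the edit, performed on a throwaway copy rather than the live /etc/fstab; the file path and NEW_UUID value are placeholders for this illustration:

```shell
# Build a throwaway copy of the relevant fstab line (placeholder content).
cat > /tmp/fstab.test <<'EOF'
UUID=6d36b3b0-0238-4c86-9368-f60b571fbab9 /boot/efi vfat defaults 0 0
EOF
NEW_UUID="A1B2-C3D4"   # placeholder; take the real value from blkid
# Comment out the old /boot/efi line, then append the new one below it.
sed -i '/\/boot\/efi/s/^/#/' /tmp/fstab.test
echo "UUID=$NEW_UUID /boot/efi vfat defaults 0 0" >> /tmp/fstab.test
cat /tmp/fstab.test
```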
For an LVM partition (use a different md index for the new device)
1. The RAID device has to be created on the partition which has the same index as the current one:
# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb2 --metadata=1.2
2. This new array needs to be added to the same volume group in which the current LVM device lies:
# vgextend vgname /dev/md2
3. The physical extents need to be moved from the old partition to the new array (this will take some time to complete):
# pvmove /dev/sda2 /dev/md2
4. After that the old partition has to be removed from the volume group and from LVM:
# vgreduce vgname /dev/sda2
# pvremove /dev/sda2
5. In order to avoid errors related to LVM, the use_lvmetad parameter value has to be changed from 1 to 0 in the /etc/lvm/lvm.conf file:
# vi /etc/lvm/lvm.conf
...
use_lvmetad = 0
...
Then, the lvm2-lvmetad service needs to be stopped, disabled and masked:
# systemctl stop lvm2-lvmetad.service
# systemctl disable --now lvm2-lvmetad.service
# systemctl disable --now lvm2-lvmetad.socket
# systemctl mask lvm2-lvmetad.socket
6. In order to complete the mirroring process, the old partition has to be added to the array:
# mdadm /dev/md2 -a /dev/sda2
7. The RAID status can be monitored with the following command:
# mdadm -D /dev/md2
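Resync progress can also be read from /proc/mdstat (for example with watch cat /proc/mdstat). The sketch below parses a sample snippet standing in for the real file, to show what the recovery line looks like during a rebuild; the numbers are illustrative:

```shell
# Sample /proc/mdstat snippet standing in for the real file during a rebuild.
sample='md2 : active raid1 sda2[2] sdb2[1]
      524224 blocks super 1.2 [2/1] [_U]
      [==>..................]  recovery = 12.6% (66176/524224) finish=0.3min speed=22058K/sec'
# Extract the recovery percentage from the progress line.
pct=$(printf '%s\n' "$sample" | sed -n 's/.*recovery = \([0-9.]*%\).*/\1/p')
echo "$pct"
```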
For swap on a separate partition (not under LVM)
1. For swap, the RAID array needs to be created slightly differently. The following commands can be used, assuming the swap is on /dev/sda3:
# mdadm --create /dev/md/swap --level=1 --raid-devices=2 missing /dev/sdb3
# mkswap /dev/md/swap
# mdadm /dev/md/swap -a /dev/sda3
2. The RAID status can be monitored with the following command:
# mdadm -D /dev/md/swap
3. After the boot, swap, and root partitions have been mirrored, the array metadata needs to be scanned and placed in the /etc/mdadm.conf file:
# mdadm --examine --scan > /etc/mdadm.conf
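The resulting file contains one ARRAY line per device, roughly like the following (UUIDs and names are illustrative, not values from a real system):

```
ARRAY /dev/md1 metadata=1.0 UUID=4a29c54e:fc8e2afc:f02708ed:de1197be
ARRAY /dev/md2 metadata=1.2 UUID=9b1f0d6a:11aa22bb:33cc44dd:55ee66ff
ARRAY /dev/md/swap metadata=1.2 UUID=0c1d2e3f:44556677:8899aabb:ccddeeff
```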
4. Then, the /etc/default/grub file needs to be updated with the new UUIDs on the GRUB_CMDLINE_LINUX line:
The UUIDs can be obtained with the following command:
# mdadm -D /dev/md* | grep UUID
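Turning those UUID lines into kernel arguments by hand is error-prone when there are several arrays. A small sketch that builds the rd.md.uuid= entries from sample mdadm -D output lines (the UUIDs are illustrative; on the real system feed in the actual command output):

```shell
# Two sample UUID lines as printed by `mdadm -D`; real values come from the
# command above.
uuids='           UUID : 4a29c54e:fc8e2afc:f02708ed:de1197be
           UUID : 9b1f0d6a:11aa22bb:33cc44dd:55ee66ff'
# Prefix each UUID with rd.md.uuid= and join them on one line.
args=$(printf '%s\n' "$uuids" | awk '{printf "rd.md.uuid=%s ", $3}')
echo "$args"
```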
Edit the grub file by adding the new entries:
# vi /etc/default/grub
#GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=<first uuid> rd.md.uuid=<second uuid> rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet"
5. Then, regenerate the grub.cfg file:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
6. Update the EFI boot manager. For that, the old EFI boot entry must be removed. First, list the current entries:
# efibootmgr -v | grep Boot
Output (example of the efibootmgr command from a virtual server):
BootCurrent: 0001
BootOrder: 0001,0006,0008,0004,0000,0005
Boot0000* Windows Boot Manager  Vendor(99e275e7-75a0-4b37-a2e6-c5385e6c00cb,)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...N...............
Boot0001* Oracle VM Server      HD(1,800,3f800,91dfa48e-aad0-4a31-9ffe-e4356d8a36c6)File(\EFI\REDHAT\SHIM.EFI)
Boot0004* Generic Usb Device    Vendor(99e275e7-75a0-4b37-a2e6-c5385e6c00cb,)
Boot0005* CD/DVD Device         Vendor(99e275e7-75a0-4b37-a2e6-c5385e6c00cb,)
Boot0006* WDC WD10EZEX-08WN4A0  BIOS(2,0,00)..BO
Boot0008* IBA CL Slot 00FE v0113        BIOS(6,0,00)..BO
Based on the output of this command, the old EFI boot entry (or entries) has to be deleted with the following command:
# efibootmgr -b 1 -B
In this case there is only one entry to remove and its number is 0001. After that, both EFI partitions need to be added to the boot manager:
# efibootmgr -c -d /dev/sda -p 1 -l \\EFI\\redhat\\shimx64.efi -L "Oracle Linux RAID SDA"
# efibootmgr -c -d /dev/sdb -p 1 -l \\EFI\\redhat\\shimx64.efi -L "Oracle Linux RAID SDB"
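When several entries match, the entry number passed to efibootmgr -b can be extracted from the output mechanically instead of by eye. A sketch using a sample line in place of the real efibootmgr output (the label and number are illustrative):

```shell
# Sample output standing in for `efibootmgr -v | grep Boot`.
sample='BootOrder: 0001,0006
Boot0001* Oracle VM Server HD(1,800,3f800,91dfa48e-aad0-4a31-9ffe-e4356d8a36c6)File(\EFI\REDHAT\SHIM.EFI)'
# Pull the hex entry number for the "Oracle VM Server" label.
num=$(printf '%s\n' "$sample" | sed -n 's/^Boot\([0-9A-Fa-f]*\)\* Oracle VM Server.*/\1/p')
echo "$num"
# On the real system the deletion would then be: efibootmgr -b "$num" -B
```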
7. The initramfs image has to be rebuilt using --mdadmconf:
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f --mdadmconf
8. Reboot the machine in order to check that everything works as expected and that the new RAID devices are being used.