“mdadm: No arrays found in config file” – error on running ‘mdadm --assemble --scan’

by admin

The Problem

After the system has booted, md0 is missing and all LVs created on top of md0 are not mounted:

# mount -a
mount: special device /dev/mapper/vg_test-x0 does not exist
mount: special device /dev/mapper/vg_test-y0 does not exist
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 spares=1 UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7
# cat /proc/mdstat
Personalities :
unused devices: [none]

Further confirmation that /dev/md0 is missing:

# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

md0 is not visible at all, and /var/log/messages does not show any I/O errors on the local disks used by md0.

The Solution

The error is caused by incorrect settings in /etc/mdadm.conf: the ARRAY line holds a stale UUID that no longer matches the array on disk. Follow the steps outlined below to resolve the issue:

1. First, scan all candidate disks and check their md superblock event counters:

# mdadm --examine /dev/sd[a-z] | egrep 'Event|/dev/sd'

Or scan all devices for the full superblock details, including the md RAID UUID:

# mdadm --examine /dev/sd[a-z]

The mdadm --examine command reads the md superblock from each listed disk and reports whether the disk is a member of an md RAID array.

Example output:

# mdadm --examine /dev/sd[a-z]

/dev/sdb:
Magic : a92b4efc
Version : 0.90.00
UUID : 08877d71:d7dc9c1b:16f3496b:a22042b7
Creation Time : Wed Aug 31 14:19:18 2016
Raid Level : raid5
Used Dev Size : 586061696 (558.91 GiB 600.13 GB)
Array Size : 2930308480 (2794.56 GiB 3000.64 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0

Update Time : Wed Sep 21 11:33:48 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Checksum : 153be7ed - correct
Events : 202

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 16 0 active sync /dev/sdb

0 0 8 16 0 active sync /dev/sdb
1 1 8 48 1 active sync /dev/sdd
2 2 8 64 2 active sync /dev/sde
3 3 8 80 3 active sync /dev/sdf
4 4 8 96 4 active sync /dev/sdg
5 5 8 112 5 active sync /dev/sdh

So mdadm is able to find the md RAID members, and the real UUID of the md0 array is 08877d71:d7dc9c1b:16f3496b:a22042b7.
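
To pull out just the UUID lines for a quick comparison, the same command can be filtered. This is a minimal sketch; adjust the /dev/sd[a-z] glob to match the disks present on your system:

# mdadm --examine /dev/sd[a-z] | grep -i uuid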

2. Compare that UUID with the one inside /etc/mdadm.conf:

# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 spares=1 UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7

The two UUIDs do not match, which is why the array is not assembled automatically at boot.
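
The mismatch can also be confirmed directly from the shell (a quick check; the grep pattern simply extracts the UUID= field from the ARRAY line):

# grep -o 'UUID=[0-9a-f:]*' /etc/mdadm.conf
UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7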

3. The array can be assembled manually by listing each member device of the md0 RAID:

# mdadm --assemble /dev/md0 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
mdadm: /dev/md0 has been started with 6 drives.
# ls -l /dev/md0
brw-r----- 1 root disk 9, 0 Sep 23 11:18 /dev/md0
# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Aug 31 14:19:18 2016
Raid Level : raid5
Array Size : 2930308480 (2794.56 GiB 3000.64 GB)
Used Dev Size : 586061696 (558.91 GiB 600.13 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Sep 21 11:33:48 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 08877d71:d7dc9c1b:16f3496b:a22042b7
Events : 0.202

Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
2 8 64 2 active sync /dev/sde
3 8 80 3 active sync /dev/sdf
4 8 96 4 active sync /dev/sdg
5 8 112 5 active sync /dev/sdh
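
After a successful assembly the array should also show up in /proc/mdstat, which was empty earlier:

# cat /proc/mdstat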

4. Now that md0 is visible, scan for the PV and VG on top of it:

# pvscan
PV /dev/md0 VG vg_data lvm2 [2.73 TB / 546.56 GB free]
Total: 1 [2.73 TB] / in use: 1 [2.73 TB] / in no VG: 0 [0 ]
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_data" using metadata type lvm2
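
A more compact summary of the same information is available from the pvs and vgs commands (part of the same lvm2 toolset; the VG name vg_data is taken from the vgscan output above):

# pvs /dev/md0
# vgs vg_data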

5. Activate the VG:

# vgchange -a y
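
To limit activation to the volume group found on md0 rather than every VG on the system, the VG name can be passed explicitly (vg_data, as reported by vgscan above):

# vgchange -a y vg_data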

6. Verify that the LVs are now active and visible:

# lvscan
ACTIVE '/dev/vg_data/lvm-admin' [200.00 GB] inherit
ACTIVE '/dev/vg_data/lvm-backup' [2.00 TB] inherit

7. Now run the mount command to mount the filesystems from /etc/fstab:

# mount -a
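
A quick check of the mount table confirms that the device-mapper devices that were missing earlier are now mounted (the exact mount points depend on /etc/fstab):

# mount | grep /dev/mapper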

8. To permanently fix the wrong UUID in /etc/mdadm.conf, execute the commands below.

– Create a backup of the current mdadm.conf:

# cp /etc/mdadm.conf /etc/mdadm.conf.bak1

– Now regenerate the configuration file:

# mdadm --examine --scan > /etc/mdadm.conf

The above command overwrites /etc/mdadm.conf with an ARRAY stanza carrying the correct UUID for each detected array.
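
The regenerated file should now reference the UUID discovered in step 1. The exact fields written by mdadm --examine --scan vary between versions, but the result should look roughly like this:

# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 UUID=08877d71:d7dc9c1b:16f3496b:a22042b7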

Filed Under: CentOS/RHEL 5, CentOS/RHEL 6, CentOS/RHEL 7, Linux
