“mdadm: No arrays found in config file” - error when running ‘mdadm --assemble --scan’

By admin

The Problem

After the system boots, md0 is missing and none of the LVs built on top of md0 are mounted:

# mount -a
mount: special device /dev/mapper/vg_test-x0 does not exist
mount: special device /dev/mapper/vg_test-y0 does not exist
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 spares=1 UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7
# cat /proc/mdstat
Personalities :
unused devices: [none]

Further confirmation that /dev/md0 is missing:

# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

md0 is not visible at all, and /var/log/messages shows no I/O errors on the local disks that back md0.

The Solution

The error is caused by incorrect settings (a stale array UUID) in /etc/mdadm.conf. Follow the steps outlined below to resolve the issue:

1. First, scan all candidate disks for md superblocks and their event counts:

# mdadm --examine /dev/sd[a-z] | egrep 'Event|/dev/sd'

Or scan all devices for the full superblock details, including the md RAID UUID:

# mdadm --examine /dev/sd[a-z]

The mdadm --examine command reads the superblock of each listed disk and reports whether the disk is a member of an md RAID array.

Example output:

# mdadm --examine /dev/sd[a-z]

/dev/sdb:
Magic : a92b4efc
Version : 0.90.00
UUID : 08877d71:d7dc9c1b:16f3496b:a22042b7
Creation Time : Wed Aug 31 14:19:18 2016
Raid Level : raid5
Used Dev Size : 586061696 (558.91 GiB 600.13 GB)
Array Size : 2930308480 (2794.56 GiB 3000.64 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0

Update Time : Wed Sep 21 11:33:48 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Checksum : 153be7ed - correct
Events : 202

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 16 0 active sync /dev/sdb

0 0 8 16 0 active sync /dev/sdb
1 1 8 48 1 active sync /dev/sdd
2 2 8 64 2 active sync /dev/sde
3 3 8 80 3 active sync /dev/sdf
4 4 8 96 4 active sync /dev/sdg
5 5 8 112 5 active sync /dev/sdh

So mdadm can find the member disks of the md0 array, and the real UUID of md0 is 08877d71:d7dc9c1b:16f3496b:a22042b7.

2. Compare that UUID with the one inside /etc/mdadm.conf:

# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 spares=1 UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7

The two UUIDs do not match.
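To make the comparison easier, the UUID recorded on the member disks and the one in the config file can be pulled out side by side (a quick sketch, assuming GNU grep; errors from devices without an md superblock are discarded). The output shown corresponds to the example above:

# mdadm --examine /dev/sd[a-z] 2>/dev/null | grep 'UUID' | sort -u
UUID : 08877d71:d7dc9c1b:16f3496b:a22042b7
# grep -o 'UUID=[0-9a-f:]*' /etc/mdadm.conf
UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7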

3. The array can be assembled manually by listing each member device of md0:

# mdadm --assemble /dev/md0 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
mdadm: /dev/md0 has been started with 6 drives.
# ls -l /dev/md0
brw-r----- 1 root disk 9, 0 Sep 23 11:18 /dev/md0
# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Aug 31 14:19:18 2016
Raid Level : raid5
Array Size : 2930308480 (2794.56 GiB 3000.64 GB)
Used Dev Size : 586061696 (558.91 GiB 600.13 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Sep 21 11:33:48 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 08877d71:d7dc9c1b:16f3496b:a22042b7
Events : 0.202

Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
2 8 64 2 active sync /dev/sde
3 8 80 3 active sync /dev/sdf
4 8 96 4 active sync /dev/sdg
5 8 112 5 active sync /dev/sdh

4. Now that md0 is visible, scan for the PV and VG:

# pvscan
PV /dev/md0 VG vg_data lvm2 [2.73 TB / 546.56 GB free]
Total: 1 [2.73 TB] / in use: 1 [2.73 TB] / in no VG: 0 [0 ]
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_data" using metadata type lvm2

5. Activate the VG:

# vgchange -a y
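If the host has other volume groups and only the affected one should be activated, the VG can also be targeted by name (vg_data here, as reported by pvscan above):

# vgchange -a y vg_data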

6. Verify that the LVs are now active and visible:

# lvscan
ACTIVE '/dev/vg_data/lvm-admin' [200.00 GB] inherit
ACTIVE '/dev/vg_data/lvm-backup' [2.00 TB] inherit

7. Now mount the file systems:

# mount -a
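As a quick sanity check, the remounted logical volumes can be confirmed with df (a simple sketch; the device-mapper names follow the vg-lv pattern shown by lvscan above):

# df -hP | grep vg_data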

8. To permanently fix the wrong UUID in /etc/mdadm.conf, regenerate the file:

– Create a backup of the current mdadm.conf:

# cp /etc/mdadm.conf /etc/mdadm.conf.bak1

– Then regenerate the config file:

# mdadm --examine --scan > /etc/mdadm.conf

The command above rewrites /etc/mdadm.conf with the correct ARRAY stanza for the running array.
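To confirm that the regenerated file now matches the running array, compare it against mdadm’s view of the active arrays; the UUID on both sides should be 08877d71:d7dc9c1b:16f3496b:a22042b7 (the exact format of the ARRAY line varies slightly between mdadm versions):

# mdadm --detail --scan
# cat /etc/mdadm.conf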
