mdadm Command Shows State : active, degraded

The Problem

On CentOS/RHEL 6, a disk has developed a problem and the mdadm command shows /dev/md5 in the "active, degraded" state.

# mdadm -Q --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Wed Apr 12 09:50:21 2017
Raid Level : raid1
Array Size : 10485696 (10.00 GiB 10.74 GB)
Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 5
Persistence : Superblock is persistent

Update Time : Mon Jun 5 14:47:09 2017
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : bdd69b24:0502f47d:04894333:532a878b
Events : 0.358889

Number   Major   Minor   RaidDevice   State
   0       0       0         0        removed
   1       8      21         1        active sync   /dev/sdb5

The Solution

The degraded state generally occurs when the physical drive is failing, when a drive temporarily stops communicating properly, or when synchronization did not complete after a disk replacement.
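For a software RAID member that is failing but still attached, the usual workflow is to mark it faulty and remove it from the array before the physical swap. A minimal sketch of those steps; /dev/sda5 is a placeholder device name, not taken from the output above (in this case slot 0 already shows as removed, so these steps may not be needed):

# mdadm --manage /dev/md5 --fail /dev/sda5      (mark the failing member faulty)
# mdadm --manage /dev/md5 --remove /dev/sda5    (detach it from the array)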

If the drive is still online, a 'Rebuild' or 'Synchronize' option may still be available. If so, the array will read from the drive the system presumes is correct and copy ALL of the data onto the other drive.
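With mdadm, the rebuild can also be triggered manually once the replacement disk is in place. A rough sketch, assuming MBR-partitioned disks, with the surviving member on /dev/sdb and the replacement disk at /dev/sdn (the new member /dev/sdn5 matches the post-replacement output further below):

# sfdisk -d /dev/sdb | sfdisk /dev/sdn          (copy the partition layout to the new disk)
# mdadm --manage /dev/md5 --add /dev/sdn5       (add the new member; resync starts automatically)
# cat /proc/mdstat                              (check the rebuild progress)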

Engage the hardware vendor to verify why the synchronize or rebuild function did not work during the disk replacement.

-- After the replacement --

[root@localhost ~]# mdadm -Q --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Wed Apr 12 09:50:21 2017
Raid Level : raid1
Array Size : 10485696 (10.00 GiB 10.74 GB)
Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent

Update Time : Mon Jun 12 17:34:10 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : bdd69b24:0502f47d:04894333:532a878b
Events : 0.818295

Number   Major   Minor   RaidDevice   State
   0       8     213         0        active sync   /dev/sdn5
   1       8      21         1        active sync   /dev/sdb5
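As a quick cross-check, /proc/mdstat should now show both members active ([UU]). Illustrative output for the array above, not captured from the original system:

# cat /proc/mdstat
Personalities : [raid1]
md5 : active raid1 sdn5[0] sdb5[1]
      10485696 blocks [2/2] [UU]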