
Managing Drives in MDADM (CentOS, Ubuntu)
Software RAID is increasingly common in Linux-based workstation and server environments. The sections below provide guidance on checking the health of software RAID (mdadm) arrays, as well as removing and re-adding drives as needed to keep those arrays healthy.
Viewing software RAID health and determining whether a drive needs to be replaced or re-added
Sample output for two 256 GB SSDs in a RAID1 configuration. If the array is degraded, you will see [U_] instead of [UU].
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      194368 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      9757568 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sda3[2] sdb3[1]
      239966016 blocks super 1.2 [2/2] [UU]
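For a fuller view of a single array (its state, member devices, and any failed slots), mdadm's --detail mode can also be used; /dev/md0 below is taken from the sample output above:
# Show array state, member devices, and failed/removed slots
mdadm --detail /dev/md0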
Re-adding a drive to an MDADM array
If any RAID drive or partition is missing, you need to re-add the drive to the MDADM array.
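If the drive being re-added is a brand-new replacement disk, its partition layout must first match the surviving disk. A minimal sketch, assuming MBR partitioning and that sda is the healthy disk and sdb the replacement (GPT disks would typically use sgdisk instead):
# Copy the partition table from the healthy disk (sda) to the replacement (sdb)
sfdisk -d /dev/sda | sfdisk /dev/sdb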
Sample command for adding a drive (partition) sdb1 to the RAID array (md0):
mdadm --add /dev/md0 /dev/sdb1
During synchronization, the output will look like the following:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      194368 blocks [2/1] [U_]
      [=>...................]  recovery =  9.9% (19242/194368) finish=1.1min speed=127535K/sec
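Rather than re-running cat /proc/mdstat by hand, the rebuild can be monitored until it finishes; /dev/md0 is again taken from the example above:
# Refresh the resync status every 5 seconds
watch -n 5 cat /proc/mdstat
# Or block until the recovery completes (useful in scripts)
mdadm --wait /dev/md0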
Removing a drive from an MDADM array
- Unmount the array containing the drive (i.e. sda1)
umount /dev/md0
- Shut down the array containing the drive sda1
mdadm --stop /dev/md0
- Remove the drive from the array
mdadm --manage /dev/md0 --remove /dev/sda1
mdadm: hot removed /dev/sda1
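Note that --remove only works against an assembled (running) array, and only for devices already marked failed or spare; if the array was stopped in the previous step, you can proceed directly to zeroing the superblock below. For hot removal from a live array, the drive is failed first (same device names as above):
# Mark the drive as failed so mdadm will allow its removal
mdadm --manage /dev/md0 --fail /dev/sda1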
- Zero the superblock on the removed drive
mdadm --zero-superblock /dev/sda1
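To confirm the RAID metadata is actually gone before repurposing the drive, the partition can be examined; mdadm reports when no superblock is present:
# Expected output: mdadm: No md superblock detected on /dev/sda1.
mdadm --examine /dev/sda1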
