Here is what happened...

I recently upgraded a system that had been running 6 x 2TB HDDs,
moving it to an EFI motherboard and 6 x 3TB HDDs. The final step in
the process was growing the RAID-5 array using metadata v0.90
(/dev/md1), consisting of 6 component devices of just under 2TB each,
to use devices of just under 3TB each. At the time I forgot about the
limitation that metadata 0.90 does not support component devices over
2TB. The grow nevertheless completed successfully, and I used the
system without problems for about two weeks. LVM2 uses /dev/md1 as a
physical volume for the volume group radagast, and pvdisplay showed
that /dev/md1 had a size of 13.64 TiB. I had been writing data to it
regularly and I believe I had well exceeded the original capacity of
the old array (about 9.4TB). All was fine until a few days ago, when
I rebooted the system. The system booted back up to a point, but
could not mount some of the file systems on logical volumes on
/dev/md1. So it seems that the mdadm --grow operation was successful,
but on boot the mdadm --assemble operation completed with a smaller
array than the one the grow operation left behind.
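
For reference, the final grow step would have been something along
these lines (I don't have the exact command history, so treat the
details as illustrative; each 2TB disk had already been replaced with
a repartitioned 3TB disk and resynced one at a time):

$ sudo mdadm --grow /dev/md1 --size=max
$ sudo pvresize /dev/md1

Here --size=max asks mdadm to extend each component to its full
available size, and pvresize makes LVM pick up the larger physical
volume.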
Here is some relevant information:

$ sudo pvdisplay /dev/md1
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               radagast
  PV Size               13.64 TiB / not usable 2.81 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              3576738
  Free PE               561570
  Allocated PE          3015168
  PV UUID               0ay0Ai-jcws-yPAR-DP83-Fha5-LZDO-341dQt
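
As a sanity check, the Total PE above multiplied by the 4 MiB extent
size works out to the same 13.64 TiB:

$ echo "3576738 * 4 / 1024 / 1024" | bc -l
13.64417266845703125000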

Here is the detail of /dev/md1 after the attempted reboot.
Unfortunately I don't have any output from the array prior to the
grow or the reboot, but the pvdisplay above does show the 13.64 TiB
size of the array after the grow operation.

$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Wed May 20 17:19:50 2009
     Raid Level : raid5
     Array Size : 3912903680 (3731.64 GiB 4006.81 GB)
  Used Dev Size : 782580736 (746.33 GiB 801.36 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 00:35:43 2011
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 6650f3f8:19abfca8:e368bf24:bd0fce41
         Events : 0.6539960

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       67        2      active sync   /dev/sde3
       3       8       51        3      active sync   /dev/sdd3
       4       8       35        4      active sync   /dev/sdc3
       5       8       83        5      active sync   /dev/sdf3
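
Note how these numbers are consistent with a 2 TiB wraparound, which
is presumably what happened when the 0.90 superblock was rewritten:
each component partition is roughly 2794 GiB (3000 GB), and
2794.33 GiB - 2048 GiB = 746.33 GiB, which is exactly the Used Dev
Size reported above. The Array Size is then just the normal RAID-5
capacity of (6 - 1) x Used Dev Size:

$ echo "5 * 782580736" | bc
3912903680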


And here is the partition layout of each drive; /dev/sd[abcdef] are
all partitioned identically.

$ sudo parted /dev/sda print
Model: ATA Hitachi HDS72303 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1066kB  1049kB                     bios_grub
 2      1066kB  207MB   206MB   ext3               raid
 3      207MB   3001GB  3000GB                     raid
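
In case it is useful, the sizes recorded in each component's 0.90
superblock can be read back directly with --examine; I would expect
all six components to report the same truncated Used Dev Size:

$ sudo mdadm --examine /dev/sda3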

https://bugs.launchpad.net/bugs/794963

Title:
  mdadm allows growing an array beyond metadata size limitations
