On 2011-03-18 03:29, NeilBrown wrote:
> On Tue, 15 Mar 2011 12:45:19 +0100 Michael Gebetsroither <mich...@mgeb.org>

Hi Neil,

First, thanks for your explanation and your answer to my bug report!

>> Package: mdadm
>> Version: 2.6.7.2-3
>> Severity: grave
>> Justification: causes non-serious data loss
>>
>> Hi,
>>
>> mdadm --grow /dev/mdX --size=max shredded the second raid1 array here.
>>
>> The commands were as follows:
>> lvextend -L +10G vda/opt
>> lvextend -L +10G vdb/opt
> 
> md was not made aware of these changes to the sizes of the individual devices,
> so

But how should I tell mdadm about the changes?
mdadm --grow /dev/md2 --size=max
seems like the logical choice to update the size.
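For reference, a read-only way to compare what md currently uses with what the devices actually offer (just a sketch; the device names follow the report above, and the sysfs path assumes a reasonably recent kernel):

```shell
# Array size as md currently sees it:
cat /sys/block/md2/size              # in 512-byte sectors
mdadm --detail /dev/md2 | grep -i 'array size'

# Actual current size of each component device, in bytes:
blockdev --getsize64 /dev/vda/opt
blockdev --getsize64 /dev/vdb/opt
```

If the component devices report more space than the array size accounts for, md has not yet noticed the lvextend.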

From the manpage about -z, --size=:
"... This value can be set with --grow for RAID level 1/4/5/6. If the array was
created with a size smaller than the currently active drives, the extra space
can be accessed using --grow. The size can be given as max which means to
choose the largest size that fits on all current drives."

This reads exactly like the command I used (to grow the raid to the maximum
size of the individual drives).
And IMHO it nowhere states that you are going to "destroy" your md metadata.

>> mdadm --grow /dev/md2 --size=max
> 
> This caused mdadm to do nothing as it always "knew" that all of the available
> space on the devices was already in use.

AFAIK the running md array (md2) still knew the old size of the disks
(I looked in /sys/block/ for it).
Shouldn't mdadm always read the metadata based on the size of the running
raid device rather than the size of the individual devices (at least as long
as the array is running)?
But yes, I see the problem: the running array doesn't have to match what the
metadata describes.
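The mismatch can be made visible by comparing the size recorded in the superblock with the real device size (a sketch; with 0.90 metadata the superblock sits near the end of the device, which is why mdadm looks for it at the wrong place after the device has grown):

```shell
# Size md has recorded for this member (read from the superblock);
# the exact field name varies between metadata versions:
mdadm --examine /dev/vda/opt | grep -iE 'used dev size|device size'

# Real current size of the member:
blockdev --getsize64 /dev/vda/opt
```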

>> After mdadm --stop /dev/md2, the array couldn't be reassembled.
>>
>> To fix this problem I simply recreated the array with:
>> # mdadm --create --assume-clean /dev/md2 -l1 -n2 /dev/vda/opt /dev/vdb/opt
>> mdadm: /dev/vda/opt appears to contain an ext2fs file system
>>     size=10485696K  mtime=Tue Feb 22 12:28:24 2011
>> mdadm: /dev/vdb/opt appears to contain an ext2fs file system
>>     size=10485696K  mtime=Tue Feb 22 12:28:24 2011
>> Continue creating array? yes
>> mdadm: array /dev/md2 started.
> 
> Excellent!

But what should one do on e.g. raid5 devices?
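For raid5, re-running --create is far more dangerous than for raid1: the chunk size, layout, metadata version and device order must all match the original exactly, or the data is scrambled. An approach sometimes suggested instead (sketched here as an untested assumption, not advice from this thread) is to cycle each member through the array one at a time, so md rewrites the superblock at the new device size while redundancy is preserved, and only then grow:

```shell
# Repeat for each member device in turn; the array is degraded
# while a member is out, so wait for the resync to finish before
# cycling the next one (watch /proc/mdstat):
mdadm /dev/md2 --fail /dev/vda/opt --remove /dev/vda/opt
mdadm /dev/md2 --add /dev/vda/opt

# Only after every member has been cycled and resynced:
mdadm --grow /dev/md2 --size=max
```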

> So this is really a 'wishlist' issue.
> You would like mdadm to notice that the devices have changed size, and tell
> md, or maybe would like md to be able to be told immediately when it happens.
> Or something like that.

Yes, having "mdadm --grow /dev/md2 --size=max" grow the array to the real size
of the underlying devices would be nice.

michael


