On Fri, May 13, 2022 at 12:53 PM Ben Koenig <techkoe...@protonmail.com>
wrote:

> > I suspect this is a result of many years of different sysadmins replacing
> > drives as they failed. we probably had the idea of eventually increasing
> > the array's storage size once all the smaller drives were replaced with
> > larger ones.
> >
> > -wes
>
> People love to try that and it never works the way anyone expects...
>
>
I've done it successfully several times, though always intentionally. I
know there is some rule about RAID devices needing to be within a certain
percentage of the same size (1%?), though I don't understand the details.
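
When I've done it, I just compared the raw device sizes and what the md
superblocks reported before swapping anything. This is only a rough
sketch, and the device names below are placeholders for whatever happens
to be in the array:

  # raw size in bytes of each member disk
  lsblk -b -o NAME,SIZE /dev/sd[abcdef]

  # what md thinks the usable size of one member is
  mdadm --examine /dev/sda1 | grep -i 'dev size'

If I remember right, mdadm only insists that a replacement be at least as
large as the space the array is already using, and the 1% figure is just
the point at which it warns (at create time) that extra space on the
larger drives will go unused.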

> Is each drive really spread across 2 different arrays? The way I'm reading
> it, each drive has 2 partitions, one of which is associated with md0 and the
> other md1.  My guess is that md1 was part of an LVM group and if you can
> figure out where the other piece is you can stitch it back together.
>

Correct. /dev/sd[abcdef]1 is md0 and /dev/sd[abcdef]2 is md1.
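
For reference, something like this is enough to double-check that mapping
and to go looking for the LVM pieces Ben mentioned (a rough sketch, with
device names as above):

  # which partitions belong to which md device
  cat /proc/mdstat
  mdadm --detail /dev/md1

  # check whether the assembled array carries an LVM physical volume
  pvscan
  pvs -o pv_name,vg_name,pv_uuid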


> As far as data integrity goes, it's probably fine as long as the RAID was
> in a degraded state when you triggered the rebuild. What doesn't make sense
> is that the act of rebuilding caused it to become inaccessible. 2 disk
> failures on a RAID6 should be fine. Do you know what the array state was
> before you put new drives in?
>
>
The array wasn't just in a degraded state when the new drive was added; it
could not be started at all. The system had crashed and would not boot
fully: it would load the kernel and whatever it could load from md0, but
once it came time to mount the volumes from md1, it stalled. I had to go
into a live-boot environment, manually assemble the array (with --force),
and add the new drive. It synced, then I had to re-apply the correct name
and UUID to the array. After that it was detected and started by the
system on md0, but it still could not mount the LVM volumes.
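
From the live environment it was roughly this sequence; treat it as a
sketch rather than an exact transcript, since /dev/sdf2 here just stands
in for the new drive's member partition and the name/UUID values are
whatever the old superblocks reported:

  # force-assemble md1 from the surviving members
  mdadm --stop /dev/md1
  mdadm --assemble --force /dev/md1 /dev/sd[abcde]2

  # add the replacement partition and watch the resync
  mdadm --manage /dev/md1 --add /dev/sdf2
  cat /proc/mdstat

  # after the resync, re-apply the original name and UUID
  # (one --update per assemble pass)
  mdadm --stop /dev/md1
  mdadm --assemble /dev/md1 --update=name --name=<original name> /dev/sd[abcdef]2
  mdadm --stop /dev/md1
  mdadm --assemble /dev/md1 --update=uuid --uuid=<original uuid> /dev/sd[abcdef]2

  # then try to find and activate the LVM volumes on top of it
  pvscan
  vgchange -ay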

-wes
