Karel Gardas @ 2017-06-15T09:07:39 +0200:
> On Thu, Jun 15, 2017 at 7:04 AM, LEVAI Daniel <l...@ecentrum.hu> wrote:
> > Thanks, Karel, for pointing this out; you are in fact right, and
> > nothing is wrong with the logging. I just forgot that I'm decrypting
> > that device 'automatically' in rc.local, and the kernel log was from
> > before this, hence the similar device names.  I still think that,
> > nonetheless, I should've gotten a degraded array that I can work with
> > (e.g. rebuild).
> >
> > As a matter of fact I removed everything from the machine, and left
> > just the four drives of the array, then booted into bsd.rd from a
> > thumb drive.
> >
> > The strangest thing is, if I boot with the 'bad' (=failing) drive as
> > part of the array, softraid brings the volume online (albeit
> > degraded) and I can even decrypt/mount the volume and use it (only
> > one drive being bad in the RAID5 array).  If I remove/replace
> > said failing drive, I don't get a degraded volume, just the
> > error about the missing chunk and a refusal to bring the volume
> > online.
> >
> > Either I completely misunderstood the whole idea about softraid and
> > the RAID5 setup (I mean, removing a device - failed or not -
> > shouldn't hinder the assembly of the array, right?), or I'm missing
> > something really obvious 8-/
> 
> I'm not sure, but I think there is a somewhat blurry line between
> array creation and array attach. In fact, OpenBSD uses the same
> command for both: bioctl -c <x>. So I see you probably have two
> possibilities:
> 
> 1) IMHO the safer one. If you have enough SATA ports, attach both
> your failing drive and your new drive to the system and boot. OpenBSD
> should detect and attach the RAID5 volume in a degraded state, and
> then you will be able to perform your rebuild (if your failing drive
> is not offline, you can use bioctl to offline it).

So I'd have the degraded array with four disks, plus the new one not in
the array, but lying there in the background.
Let's say the failing drive is offline. Then to rebuild the degraded
array, I'd run
# bioctl -R /dev/newdisk sd8

This way, I basically add a new disk to the array, so I'll end up with a
five-disk RAID5 setup (with the failing drive being the 'fourth')?
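
If I'm reading bioctl(8) right, the full sequence would then be
something like this (sd3a standing in for the failing chunk and sd4a
for the RAID partition on the new disk -- placeholder names, of
course): first check the volume/chunk status with 'bioctl sd8', then
offline the failing chunk and start the rebuild:

# bioctl -O /dev/sd3a sd8
# bioctl -R /dev/sd4a sd8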

How do you think this behavior -- that softraid currently won't assemble
the volume with a missing disk -- will change after I remove the failing
drive again, leaving the array with just four (but working) drives?


> or
> 2) less safe (read: completely untested, and unverified by reading the
> code on my side). Use bioctl -c 5 -l <your drives, including the new
> one> <etc> to attach the RAID5 array including the new drive. Please do
> *NOT* force this. See if bioctl complains, for example, about missing
> metadata, or if it automatically detects the new drive and starts a
> rebuild.

I've actually given this some thought before, but I swiftly discarded
it, since -c is a 'create' option and I didn't want to 'overwrite' my
existing RAID5 array.

But to be sure I'm on the same page: this way I won't have five disks
attached, only four (one of them being the new, clean one), and I'd
basically instruct softraid to 'recreate' the RAID5 array from the three
original drives and the one new drive?
The assumption is -- if I'm not mistaken -- that softraid would somehow
figure out that three of the four disks (specified by option '-l') are
part of a RAID5 array, and that it'd essentially 'add' the new disk as
the fourth, right?
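
Spelled out with placeholder names (sd0a, sd1a, sd2a as the three
surviving chunks and sd4a as the RAID partition on the new disk), I
assume that would be something like:

# bioctl -c 5 -l /dev/sd0a,/dev/sd1a,/dev/sd2a,/dev/sd4a softraid0

i.e. without any -C force, as you say, and then I'd watch whether
bioctl complains about the missing metadata on sd4a or starts a
rebuild on its own.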

> Generally speaking, I'd use (1), since I've used it in the past and
> had no issues with it.

Have you had the same problem, in that softraid wouldn't assemble the
RAID volume with a missing disk? And how did you "remove" the failed
device from the RAID array (i.e. you 'add' the new disk with -R during
the rebuild, but how do you 'remove' the failed/offline drive, with
e.g. bioctl)?


Daniel
