Seems your controller is actually doing only harm here, or am I missing
something?

On Feb 4, 2010 8:46 AM, "Karl Pielorz" <kpielorz_...@tdx.co.uk> wrote:


--On 04 February 2010 11:31 +0000 Karl Pielorz <kpielorz_...@tdx.co.uk>
wrote:

> What would happen...
A reply to my own post: I tried this out. When you bring 'ad2' online
again, ZFS immediately logs a 'vdev corrupt' failure and marks 'ad2' (which
at this point is a byte-for-byte copy of 'ad1', as it was being written to
in the background) as 'FAULTED' with 'corrupted data'.

You can't "replace" it with itself at that point, but detaching 'ad2' and
then attaching it back to 'ad1' results in a resilver, and recovery.
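For reference, the detach/attach sequence above would look something like
this (the pool name 'tank' is just a placeholder from my notes - substitute
your own pool name; note that 'zpool attach' takes the existing device
first, then the device being added):

```shell
# 'ad2' came back FAULTED with 'corrupted data', and a replace with
# itself is refused - so detach it from the mirror first:
zpool detach tank ad2

# Re-attach it as a mirror of 'ad1'. ZFS now treats it as a fresh
# device and resilvers it from 'ad1':
zpool attach tank ad1 ad2

# Watch the resilver progress:
zpool status tank
```

(Obviously these commands need a real pool with real devices - don't run
them blind on a production system.)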

So, to answer my own question: from my tests it looks like you can do this
and "get away with it". It's probably not ideal, but it does work.

A safer bet would be to detach the drive from the pool, and then re-attach
it (at which point ZFS assumes it's a new drive and probably ignores the
'mirror image' data that's on it).

-Karl

(The reason for testing this is a weird RAID setup I have: if 'ad2' fails
and gets replaced, the RAID controller will mirror 'ad1' over to 'ad2' -
and cannot be stopped. However, once the re-mirroring is complete, the
controller steps out of the way and allows raw access to each disk in the
mirror. Strange - a long story, but true.)


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss