On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey <solar...@nedharvey.com> wrote:

> > From the information I've been reading about the loss of a ZIL device,
> What the heck?  Didn't I just answer that question?
> I know I said this is answered in ZFS Best Practices Guide.
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Separate_Log_Devices
>
> Prior to pool version 19, if you have an unmirrored log device that fails,
> your whole pool is permanently lost.
> Prior to pool version 19, mirroring the log device is highly recommended.
> In pool version 19 or greater, if an unmirrored log device fails during
> operation, the system reverts to the default behavior, using blocks from the
> main storage pool for the ZIL, just as if the log device had been gracefully
> removed via the "zpool remove" command.
>


This week I had a bad experience replacing an SSD that was part of a
hardware RAID-1 volume. While the array was rebuilding, the source SSD
failed and the volume was taken off-line by the controller.

The server kept working just fine, but it seemed to have switched from the
usual 30-second write intervals to sending all writes directly to the
disks. I could confirm this with iostat.
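
For anyone who wants to watch for the same thing, something along these
lines shows the per-device write activity (the pool name "tank" below is
just a placeholder for the real pool):

  # per-vdev activity on the pool, refreshed every 5 seconds
  zpool iostat -v tank 5

  # extended per-device statistics with descriptive device names
  iostat -xn 5

With the log device gone, the synchronous write traffic shows up on the
main pool disks instead of the slog.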

We've had some compatibility issues between LSI MegaRAID cards and a few
MTRON SSDs, and I didn't believe the SSD had really died. So I brought it
off-line and back on-line, and everything started to work again.

ZFS showed the log device c3t1d0 as removed. After the RAID-1 volume was
back, I replaced that device with itself and a resilver started. I don't
know what it was resilvering against, but it took 2h10min. I should
probably have tried a zpool offline/online as well.
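
For reference, the sequence was roughly this (again, "tank" stands in for
the real pool name):

  # check the pool and the state of the log device
  zpool status -v tank

  # replace the log device with itself once the RAID-1 volume is healthy
  zpool replace tank c3t1d0

  # the offline/online variant I probably should have tried as well
  zpool offline tank c3t1d0
  zpool online tank c3t1d0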

So I think that if a log device fails AND you have to import your pool
later (server rebooted, etc.)... then you lose your pool (prior to version
19). Right?
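
(You can check what version a pool is running with the stock zpool tooling,
so you can tell whether it is already at 19 or later:

  # show the on-disk version of the imported pools
  zpool upgrade

  # list the versions supported by the installed ZFS bits
  zpool upgrade -v
)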

This happened on OpenSolaris 2009.06.

-- 
Giovanni