On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:

F. Wessels wrote:
Thanks for posting this solution.

But I would like to point out that bug 6574286 "removing a slog doesn't work" still isn't resolved. A fix is on its way, according to George Wilson. But in the meantime, if something happens you might be in a lot of trouble. Even without some unfortunate incident you cannot, for example, export your data pool, pull the drives, and leave the root pool behind.
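To illustrate, this is exactly the sequence the bug blocks (pool and slice names here are hypothetical):

    # zpool remove tank c0t1d0s3    <- log device removal not supported yet (6574286)
    # zpool export tank
    # zpool import tank             <- fails on another box without the slog slice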

In my case the slog slice wouldn't be the slog for the root pool; it would be the slog for a second data pool.

If the device went bad, I'd have to replace it, true. But if the device goes bad, then so does a good part of my root pool, and I'd have to replace that too.

Mirror the slog to match your mirrored root pool.
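Something like this, with the slice names just for illustration, attaches a mirrored slog to the data pool in one step:

    # zpool add datapool log mirror c0t0d0s3 c0t1d0s3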

Don't get me wrong, I would like such a setup a lot. But I'm not going to implement it until the slog can be removed or the pool can be imported without the slog.

In the meantime, can someone confirm that in such a case (root pool and ZIL in two slices, mirrored) the write cache can be enabled with format? Only ZFS is using the disk, but perhaps I'm wrong on this. There have been posts regarding enabling the write cache, but I couldn't find a conclusive answer for the above scenario.
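For reference, the path I'm asking about is format's expert mode (the disk name is just an example):

    # format -e c0t0d0
    format> cache
    cache> write_cache
    write_cache> enable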


When you have just the root pool on a disk, ZFS won't enable the write cache by default. I believe that's because ZFS only turns the cache on when it owns the whole disk, and a root pool has to live on a slice. I think you can manually enable it, but I don't know the dangers. Adding the slog shouldn't be any different. To be honest, I don't know how closely the write caching on an SSD matches what a moving disk has.
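My understanding of the difference, with device names illustrative:

    # zpool create tank c1t0d0      <- whole disk: ZFS puts an EFI label
                                       on it and enables the write cache
    # zpool create tank c1t0d0s0    <- slice: ZFS leaves the cache alone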

Write caches only help hard disks. Most (all?) SSDs do not have volatile write buffers. Volatile write buffers are another "bad thing" you can forget when you go to SSDs :-)
 -- richard

