Hi all
I've just set up a new system with 11 x 7-drive RAIDz2 VDEVs. Running iozone to
benchmark and test the system, it seems to be running horribly slow. iostat -xd
shows one (or two) drives slowing down the bunch, and zfs-stats.sh (by
Alasdair, grab it from http://karlsbakk.net/zfs-stats.sh )
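[Editor's note: a sketch of how a single slow drive typically shows up in iostat -xd on Solaris; the device names and numbers below are made up for illustration.]

```sh
# Print extended per-device statistics every 5 seconds.
# A laggard drive stands out by its service time (svc_t, in ms)
# and busy percentage (%b) compared to its vdev siblings:
iostat -xd 5
#                  extended device statistics
# device    r/s    w/s   kr/s    kw/s wait actv  svc_t  %w  %b
# sd3      10.0  120.0  640.0  7680.0  0.0  0.4    3.1   0   9
# sd4      10.0  120.0  640.0  7680.0  0.0  8.9  412.6   0  98   <- slow disk
```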
Hi all
I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs,
all on WD Black drives. Now, it seems two of these drives were bad: one of them
had a bunch of errors, and the other was very slow. After zfs offlining these and
then zfs replacing them with online spares, resilver
On Sun, Dec 5, 2010 at 2:22 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Hi all
I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2
VDEVs, all on WD Black drives. Now, it seems two of these drives were bad,
one of them had a bunch of errors, the other was very slow.
Hot spares are dedicated spares in the ZFS world. Until you replace
the actual bad drives, you will be running in a degraded state. The
idea is that spares are only used in an emergency. You are degraded
until your spares are no longer in use.
--Tim
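[Editor's note: the lifecycle Tim describes can be sketched as a command sequence; the pool name and device names below are hypothetical.]

```sh
# While the hot spare stands in for the bad disk, the pool is DEGRADED:
zpool status tank
#   raidz2-0     DEGRADED
#     spare-3    DEGRADED
#       c2t3d0   FAULTED    # the bad drive
#       c9t0d0   ONLINE     # hot spare, currently in use
#   spares
#     c9t0d0     INUSE

# Replace the faulted drive with a new physical disk; once the
# resilver completes, the spare returns to AVAIL and the pool
# goes back to ONLINE:
zpool replace tank c2t3d0 c5t9d0
```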
Thanks for the clarification. Wouldn't
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Hot spares are dedicated spares in the ZFS world. Until you replace
the actual bad drives, you will be running in a degraded state. The
idea is that spares are only used in an emergency. You are degraded
until your
Hi,
Does anyone have experience with 3TB HDDs in ZFS? Can Solaris recognize these
new drives?
Thanks.
Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks for the clarification. Wouldn't it be nice if ZFS could fail over
to a spare and then allow the replacement to become the new spare, as is
done with most commercial hardware RAIDs?
If you use zpool detach to remove the disk that went bad, the spare
is promoted to a proper
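[Editor's note: the detach-based promotion being described looks roughly like this; device names are made up.]

```sh
# While the spare is resilvered and in use, detach the original
# (bad) disk. The spare then stops being a spare and becomes a
# permanent member of the raidz2 vdev:
zpool detach tank c2t3d0

# A replacement disk can then be added back as the new hot spare:
zpool add tank spare c5t9d0
```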