Hi everyone,

We're building a storage system that needs about 2TB of capacity and
good sequential write speed. The server is a Sun X4200 running
Solaris 10u4 (plus yesterday's recommended patch cluster); the array we
bought is a Transtec Provigo 510 12-disk array. The disks are SATA, and
the array is connected to the Sun via U320 SCSI.

The array was sold to us as supporting JBOD and various other RAID
levels, but 'JBOD' turns out to mean 'create a single-disk stripe for
every drive'. That works, after a fashion: with a 12-drive pool using
raidz and one hot spare, I get 132MB/s write performance; with raidz2
it's still 112MB/s. If I instead configure the array as RAID-50 through
the hardware RAID controller, I can only manage 72MB/s.
So at first glance, this seems a good case for ZFS.
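
For reference, the pool is laid out roughly like this (pool and device
names below are examples, not the actual ones):

  # 11 of the 12 single-disk "JBOD" LUNs in a raidz, plus 1 hot spare
  zpool create tank raidz \
      c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 \
      spare c2t11d0
  # (the raidz2 test is the same command with 'raidz2')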

Unfortunately, if I then pull a disk from the pool, ZFS keeps trying
to write to it and never activates the hot spare. 'zpool status' shows
the pool as degraded, with the pulled drive marked as unavailable - and
the hot spare still marked as available. Write performance also drops
to about 32MB/s.
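
I assume the spare is supposed to be pulled in by fault management, so
the obvious things to check would be something like this (pool name
'tank' is an example):

  zpool status -v tank    # pool DEGRADED, spare still listed as AVAIL
  fmadm faulty            # has FMA diagnosed a fault for the pulled disk?
  fmdump -eV | tail       # recent error-report telemetry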

If I then try to activate the hot spare by hand (zpool replace <pool>
<broken disk> <hot spare>), the resilvering starts, but never makes it
past 10% - it seems to restart over and over. As this box is not in
production yet and I'm the only user on it, I'm 100% sure that nothing
is happening on the ZFS filesystem during the resilver - no reads, no
writes, and certainly no snapshots.
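
The commands are along these lines (pool and device names are
examples):

  # kick the spare in by hand
  zpool replace tank c2t5d0 c2t11d0

  # then watch the resilver: the completion percentage climbs and resets
  while true; do
      zpool status tank | grep -i resilver
      sleep 60
  done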

In /var/adm/messages, I see this message repeated several times each minute:

  Nov 12 17:30:52 ddd scsi: [ID 107833 kern.warning] WARNING:
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd47):
  Nov 12 17:30:52 ddd     offline or reservation conflict
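
For what it's worth, the state of the device itself can be checked
with something like this (after mapping sd47 to its cXtYdZ name):

  iostat -En    # per-device soft/hard/transport error counters
  cfgadm -al    # attachment-point state of the controllers and disks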

Why isn't this enough for ZFS to switch over to the hot spare?
I've tried disabling the write cache on the array (setting it to
write-through), but that didn't make any difference to the behaviour
either.
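
To be clear, that was the cache setting in the array's own firmware;
the individual disks' write caches, if they're reachable at all behind
the controller, would have to be toggled per-disk with format, roughly:

  # hypothetical device name; menus are: cache -> write_cache -> disable
  format -e c2t5d0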

I'd appreciate any insights or hints on how to proceed with this -
should I even be trying to use ZFS in this situation?

Regards, Paul Boven.
-- 
Paul Boven <[EMAIL PROTECTED]> +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science