On 05/25/2012 08:40 PM, Richard Elling wrote:
> See the solution at https://www.illumos.org/issues/644
>  -- richard

And predictably, I'm back with another n00b question regarding this
array. I've put a pair of LSI-9200-8e controllers in the server and
cabled the enclosure to both HBAs. As a result (why?) I'm getting some
really strange behavior:

 * piss-poor performance (around 5 MB/s per disk, tops)
 * fmd(1M) pegging one core at near 100% each time something reads
   from or writes to the pool
 * using fmstat(1M) I noticed that it's the eft module receiving
   hundreds of fault reports every second (the exact commands I'm
   running are below, after the ereport excerpt)
 * fmd is being flooded with multipath failover ereports like:

...
May 29 21:11:44.9408 ereport.io.scsi.cmd.disk.tran
May 29 21:11:44.9423 ereport.io.scsi.cmd.disk.tran
May 29 21:11:44.8474 ereport.io.scsi.cmd.disk.recovered
May 29 21:11:44.9455 ereport.io.scsi.cmd.disk.tran
May 29 21:11:44.9457 ereport.io.scsi.cmd.disk.dev.rqs.derr
May 29 21:11:44.9462 ereport.io.scsi.cmd.disk.tran
May 29 21:11:44.9527 ereport.io.scsi.cmd.disk.tran
May 29 21:11:44.9535 ereport.io.scsi.cmd.disk.dev.rqs.derr
May 29 21:11:44.6362 ereport.io.scsi.cmd.disk.recovered
...
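(For anyone who wants to see exactly what I'm looking at, these are the
stock FMA commands I've been using to watch the flood; nothing beyond
what ships with the OS:

  # per-module fmd statistics; eft is the one chewing the CPU
  fmstat 1
  # statistics for just the eft module
  fmstat -m eft 1
  # the incoming ereports, with full detail
  fmdump -eV | tail
)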



I suspect that multipathing is not exactly happy with my Toshiba disks,
but I have no idea what to do to make it behave at least somewhat
acceptably. I've tried messing with scsi_vhci.conf, setting
load-balance="none", changing the scsi-vhci-failover-override for the
Toshiba disks to f_asym_lsi, flashing the latest as well as older
firmware on the cards, reseating them in other PCIe slots, removing one
cable and even removing one whole HBA, unloading the eft fmd module,
etc., but nothing has helped so far and I'm running out of ideas. Does
anybody have a suggestion on what else I might try?
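For reference, this is roughly what I had in /kernel/drv/scsi_vhci.conf
while testing (the Toshiba product ID below is just a placeholder; I
used the actual inquiry string of my disks, with the vendor ID padded
out to 8 characters), followed by a reboot so the changes got picked up:

  load-balance="none";
  scsi-vhci-failover-override =
      "TOSHIBA <product-id>", "f_asym_lsi";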

Cheers,
--
Saso
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
