I tested with zfs_vdev_max_pending=8, hoping it would make the error messages
arcmsr0: too many outstanding commands (257 256)
go away, but it did not.
With zfs_vdev_max_pending=8 I would have thought only 128 commands could
be outstanding in total (16 drives * 8 = 128).
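The arithmetic behind that expectation can be sketched as below; it assumes each drive can queue up to zfs_vdev_max_pending commands and that nothing else (scrub, resilver, metadata I/O) adds to the count, which may not hold in practice:

```shell
# Rough worst-case estimate of outstanding commands at the HBA,
# assuming 16 drives each queue up to zfs_vdev_max_pending commands.
drives=16
max_pending=8
echo "estimated max outstanding: $((drives * max_pending))"
# 128 -- comfortably under the driver's 256-command ceiling
```

If the estimate held, the 257th command in the error message should never have been issued, which suggests something other than per-vdev queuing is contributing.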
However
I've tried putting this in /etc/system and rebooting:
set zfs:zfs_vdev_max_pending = 16
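As an aside, the live value can be read and even changed with mdb without a reboot, which makes it quicker to experiment; the symbol name below is my assumption based on the tunable's name (this is a config fragment, so treat it as a sketch):

```shell
# /etc/system entry (takes effect only after a reboot):
#   set zfs:zfs_vdev_max_pending = 16

# Read the current value from the running kernel (as root):
echo "zfs_vdev_max_pending/D" | mdb -k

# Or change it on the live kernel, no reboot needed:
echo "zfs_vdev_max_pending/W0t8" | mdb -kw
```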
Are we sure that number equates to a SCSI command?
Perhaps I should set it to 8 and see what happens.
(I can queue 256 SCSI commands across 16 drives.)
I still got these error messages in the log.
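One way to check whether the queues really get that deep would be to watch extended I/O statistics while the errors occur; on Solaris the actv column of iostat shows per-device outstanding commands (system-specific, so no expected output shown):

```shell
# Sample extended I/O statistics every 5 seconds; the "actv" column
# shows commands currently outstanding on each device, so summing it
# across the 16 drives approximates what the arcmsr HBA is holding.
iostat -xn 5
```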
Jan
Thanks for the info. I'm running the latest firmware for my card: V1.46
with BOOT ROM version V1.45.
Could you tell me how you have your card configured? Are you using JBOD,
RAID, or pass-through? What is your Max SATA mode set to? How many drives
do you have attached?
What is your ZFS
Here's an update:
I thought that the error message
arcmsr0: too many outstanding commands
might be due to a SCSI queue being overrun.
The areca driver has
#define ARCMSR_MAX_OUTSTANDING_CMD 256
http://src.opensolaris.org/source/s?defs=ARCMSR_MAX_OUTSTANDING_CMD
What might be
Under Solaris 10 u6, no matter how I configured my Areca 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block:
Thanks for the reply. I've also had issues with consumer-class drives and other
RAID cards.
The drives I have here (all 16 of them) are Seagate® Barracuda® ES enterprise
hard drives, model number ST3500630NS.
If the problem were with the drives, I would expect the same behavior in both
Solaris and