I tested with zfs_vdev_max_pending=8.
I hoped this would make the error messages
arcmsr0: too many outstanding commands (257 256)
go away, but it did not.
With zfs_vdev_max_pending=8 I would have expected only 128 commands total to
be outstanding (16 drives * 8 = 128).
However
I've tried putting this in /etc/system and rebooting:
set zfs:zfs_vdev_max_pending = 16
Are we sure that number equates to a SCSI command?
Perhaps I should set it to 8 and see what happens.
(I have 256 SCSI commands I can queue across 16 drives.)
I still got these error messages in the log.
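(One way to confirm what value the kernel actually picked up after the reboot is to read the tunable back with mdb, assuming the stock zfs_vdev_max_pending symbol:
  echo zfs_vdev_max_pending/D | mdb -k
which prints the current value in decimal.)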
Jan
Charles Wright wrote:
I've tried putting this in /etc/system and rebooting
set zfs:zfs_vdev_max_pending = 16
You can change this on the fly, without rebooting.
See the mdb command at:
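Presumably it is the usual mdb poke of the live kernel value, along these lines (the 0t prefix means decimal, so substitute whatever value you want to test):
  echo zfs_vdev_max_pending/W0t8 | mdb -kw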
There is an update in build 105, but it only pertains to the RAID
management tool:
Issues Resolved:
BUG/RFE: 6776690 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6776690
Areca raid management util doesn't work on solaris
Files Changed:
Thanks for the info. I'm running the latest firmware for my card: V1.46
with BOOT ROM version V1.45.
Could you tell me how you have your card configured? Are you using JBOD,
RAID, or Pass Through? What is your Max SATA mode set to? How many drives
do you have attached?
What is your ZFS
Here's an update:
I thought that the error message
arcmsr0: too many outstanding commands
might be due to a SCSI queue being overrun.
The Areca driver has
#define ARCMSR_MAX_OUTSTANDING_CMD 256
http://src.opensolaris.org/source/s?defs=ARCMSR_MAX_OUTSTANDING_CMD
What might be
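To make the (257 256) in the message concrete: if ARCMSR_MAX_OUTSTANDING_CMD is a per-controller cap, which is how the 256-commands-across-16-drives arithmetic earlier in the thread treats it, the message would fire when a 257th command is attempted while 256 are already in flight. A toy model of that kind of check (only an illustration of the shape of the test, not the actual arcmsr source):

/* Toy model only: NOT the arcmsr driver source.  It just shows how a
 * per-controller outstanding-command counter produces a message like
 * "arcmsr0: too many outstanding commands (257 256)". */
#include <stdio.h>

#define ARCMSR_MAX_OUTSTANDING_CMD 256   /* value from the define above */

static int outstanding = 0;              /* commands currently in flight */

static int queue_command(int instance)
{
    if (outstanding + 1 > ARCMSR_MAX_OUTSTANDING_CMD) {
        /* The two numbers are the attempted count and the cap. */
        printf("arcmsr%d: too many outstanding commands (%d %d)\n",
               instance, outstanding + 1, ARCMSR_MAX_OUTSTANDING_CMD);
        return -1;                       /* command refused */
    }
    outstanding++;
    return 0;
}

int main(void)
{
    /* 16 drives x 16 queued I/Os fills all 256 slots; one more trips it. */
    for (int i = 0; i < 257; i++)
        queue_command(0);
    return 0;
}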
Charles Wright wrote:
Here's an update:
I thought that the error message
arcmsr0: too many outstanding commands
might be due to a SCSI queue being overrun.
Rather than messing with sd_max_throttle, you might try
changing the number of I/Os ZFS will queue to a vdev.
IMHO this is
Just to let everybody know, I'm in touch with Charles and we're
working on this problem offline. We'll report back to the list
when we've got something to talk about.
James
On Wed, 14 Jan 2009 08:37:44 -0800 (PST)
Charles Wright char...@asc.edu wrote:
Here's an update:
I thought that the
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block:
Just a hunch, but what kind of drives are you using? Many of the RAID card
vendors report that consumer-class drives are incompatible with their cards
because the drives will spend much longer trying to recover from a failure than
the enterprise-class drives will. This causes the card to think
Thanks for the reply. I've also had issues with consumer-class drives and other
RAID cards.
The drives I have here (all 16 of them) are Seagate® Barracuda® ES enterprise
hard drives, model number ST3500630NS.
If the problem were with the drives I would expect the same behavior in both
Solaris and
Charles Wright wrote:
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
[snip litany of errors]
I had similar problems on a 1120 card with 2008.05.
I upgraded to 2008.11 and the something*.16 Sun areca