I've got the box on eval, and just pushing through its paces. Ideally I would
be replicating to another x4500, but I don't have another one and didn't want
to use 22 disks for another pool.
This message posted from opensolaris.org
Michael Kucharski wrote:
> We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have
> the file system mounted over v5 krb5 NFS and accessed directly. The pool
> is a 20TB pool and is using . There are three filesystems: backup, test,
> and home. Test has about 20 million files and
I'm not sure. But when I would re-run a scrub, I got the errors at the same
block numbers, which indicated that the disk was really bad. It wouldn't hurt
to make the entry in the /etc/system file, reboot, and then try the scrub
again. If the problem disappears then it is a driver bug.
Gary
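Gary's check can be sketched as a comparison of the block numbers reported after two scrubs. The log lines below are stand-ins (the "Requested Block" values are made up, and the format is abbreviated), not output from the thread's machine:

```shell
# Capture the read-error lines logged after each of two scrubs. On a live
# system these would come from /var/adm/messages; here they are stand-ins.
scrub1=$(mktemp); scrub2=$(mktemp)
cat > "$scrub1" <<'EOF'
sd13 Error for Command: read  Error Level: Retryable  Requested Block: 123456
sd13 Error for Command: read  Error Level: Retryable  Requested Block: 789012
EOF
cat > "$scrub2" <<'EOF'
sd13 Error for Command: read  Error Level: Retryable  Requested Block: 123456
sd13 Error for Command: read  Error Level: Retryable  Requested Block: 789012
EOF
# Identical block numbers in both runs point at bad media; shifting block
# numbers point more toward a driver or transport problem.
blocks1=$(awk '{print $NF}' "$scrub1" | sort)
blocks2=$(awk '{print $NF}' "$scrub2" | sort)
if [ "$blocks1" = "$blocks2" ]; then
    echo "same blocks both scrubs: suspect bad disk"
else
    echo "blocks differ between scrubs: suspect driver/transport"
fi
rm -f "$scrub1" "$scrub2"
```

On the real box, each scrub would be kicked off with `zpool scrub <pool>` and the error lines collected from the kernel log afterwards.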
Thanks. Looks like I have this bug. Is it a hardware problem combined with a
software problem?
Oct 9 09:35:43 zeta1 sata: [ID 801593 kern.notice] NOTICE: /pci@…,0/pci1022,7458@…/pci11ab,11ab@…:
Oct 9 09:35:43 zeta1 port 3: device reset
Oct 9 09:35:43 zeta1 s
Michael <…@bigfoot.com> writes:
>
> Excellent.
>
> Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING:
> /pci@2,0/pci1022,7458@8/pci11ab,11ab@1/disk@2,0 (sd13):
> Oct 9 13:36:01 zeta1  Error for Command: read    Error Level: Retryable
>
> Scrubbing now.
This is on
Excellent.
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /pci@2,0/pci1022,7458@8/pci11ab,11ab@1/disk@2,0 (sd13):
Oct 9 13:36:01 zeta1  Error for Command: read    Error Level: Retryable
Scrubbing now.
Big thanks gg
Are there any clues in the logs? I have had a similar problem when a bad disk
block was uncovered by ZFS. I've also seen this when using the Silicon Image
driver without the recommended patch.
The former became evident when I ran a scrub. I saw the SCSI timeout errors pop
up in the "kern" sys
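The log check being described can be sketched as a small pipeline over the kernel log. The excerpt below is a stand-in built from the thread's own lines (paths elided), so the sketch is self-contained; on a live system you would read /var/adm/messages instead:

```shell
# Hedged sketch: tally SCSI/SATA complaints per sd instance. Repeated hits
# on one disk suggest bad media on that disk; resets spread across ports
# suggest a controller or driver issue.
msgs=$(mktemp)
cat > "$msgs" <<'EOF'
Oct  9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: .../disk@2,0 (sd13):
Oct  9 13:36:01 zeta1   Error for Command: read    Error Level: Retryable
Oct  9 09:35:43 zeta1 sata: [ID 801593 kern.notice] NOTICE: ...
Oct  9 09:35:43 zeta1   port 3: device reset
EOF
# Count warnings per sd instance.
counts=$(grep -o 'sd[0-9][0-9]*' "$msgs" | sort | uniq -c)
echo "$counts"
rm -f "$msgs"
```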
Every day we see pauses of sometimes 60 seconds to read 1K of a file, for
local reads as well as NFS, in a test setup.
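A minimal way to reproduce that measurement, assuming a scratch file (the path and the use of a temp file are placeholders, not the thread's setup): time a single 1K read and report the elapsed seconds. On a healthy pool this finishes well under a second; during the stalls described above it would approach 60 seconds.

```shell
# Create a 1K scratch file, then time reading it back with dd.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=1 2>/dev/null
t0=$(date +%s)
dd if="$f" of=/dev/null bs=1024 count=1 2>/dev/null
t1=$(date +%s)
elapsed=$((t1 - t0))
echo "read of 1K took ${elapsed}s"
rm -f "$f"
```

Note that `date +%s` only has one-second granularity, which is coarse but plenty for spotting 60-second pauses. Caching will hide the stall on a re-read, so in practice each probe should target a file that is not already in ARC.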
We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have the
file system mounted over v5 krb5 NFS and accessed directly. The pool is a
20TB pool and is usi