Were you able to fix this problem in the end?
Unfortunately, no. I believe Matthew Ahrens took a look at it and couldn't
find the cause or how to fix it. We had to destroy the pool and re-create it
from scratch.
Fortunately, this was during the ZFS testing period, and no critical data was lost.
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME          STATE     READ WRITE CKSUM
tank          ONLINE       0
This is an old topic, discussed many times at length. However, I
still wonder if there are any workarounds to this issue except
disabling the ZIL, since it makes ZFS over NFS almost unusable (it's a
whole order of magnitude slower). My understanding is that the ball is in the
hands of NFS due to
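As a hedged aside for later readers of this thread: on current OpenZFS there is a per-dataset knob that did not exist on 2007-era Solaris, where the only option was the global zil_disable tunable in /etc/system. A minimal sketch, assuming the NFS-exported dataset is named tank/nfs_export (the dataset name is an assumption, not from the thread):

```shell
# Caution: sync=disabled drops synchronous-write guarantees, so NFS
# clients can lose acknowledged writes if the server crashes.
# Dataset name "tank/nfs_export" is an assumption for illustration.
zfs set sync=disabled tank/nfs_export

# Restore the default behaviour once you are done measuring:
zfs set sync=standard tank/nfs_export
```

This is a benchmarking/triage aid, not a recommended steady-state configuration.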
BYTES    0x19c000   0x11da000
EREAD    0
EWRITE   0
ECKSUM   0

This will show you any read/write/cksum errors.
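The raw per-vdev counters George quotes above are normally reached through the standard tools; a sketch, assuming the pool from earlier in the thread is named tank:

```shell
# Per-vdev READ/WRITE/CKSUM error columns, plus any files affected:
zpool status -v tank

# Detailed FMA error reports behind those counters
# (the same command Matthew Ahrens suggests later in the thread):
fmdump -eV
```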
Thanks,
George
Siegfried Nikolaivich wrote:
Hello All,
I am wondering if there is a way to save the scrub results right before the
scrub is complete.
After upgrading to Solaris 10U3 I still have ZFS panicking right as the scrub
completes. The scrub results seem to be cleared when the system boots back up,
so I never get a chance to see them.
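One possible workaround for losing the report to the panic is to snapshot the scrub status to disk while it is still running. A minimal sketch, assuming the pool is named tank (the actual pool name is not shown in this message):

```shell
# Poll the scrub status once a minute and keep the latest copy on disk,
# so something survives if the box panics right as the scrub completes.
while zpool status tank | grep -q 'in progress'; do
    zpool status -v tank > /var/tmp/tank-scrub-latest.txt
    sync                      # push the log file to stable storage
    sleep 60
done
```

After the reboot, /var/tmp/tank-scrub-latest.txt holds the status from within the last minute of the scrub.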
Hello,
I am not sure if I am posting in the correct forum, but it seems somewhat
ZFS-related, so I thought I'd share it.
While the machine was idle, I started a scrub. Around the time the scrubbing
was supposed to be finished, the machine panicked.
This might be related to the 'metadata
On 24-Oct-06, at 9:11 PM, James McPherson wrote:
This error from the marvell88sx driver is of concern. The 10b8b decode
and disparity error messages make me think that you have a bad piece
of hardware. I hope it's not your controller, but I can't tell
without more
data. You should have a
On 24-Oct-06, at 9:47 PM, James McPherson wrote:
On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote:
And this is shown on the rest of the ports:
c0t?d0  Soft Errors: 6 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3320620AS  Revision: C  Serial No:
Size
On Mon, Oct 09, 2006 at 11:08:14PM -0700, Matthew Ahrens wrote:
You may also want to try 'fmdump -eV' to get an idea of what those
faults were.
I am not sure how to interpret the results, maybe you can help me. It looks
like the following with many more similar pages following:
% fmdump
Yeah, good catch. So this means that it seems to be able to read the
label off of each device OK, and the labels look good. I'm not sure
what else would cause us to be unable to open the pool... Can you try
running 'zpool status -v'?
The command seems to return the same thing:
%
status: The pool metadata is corrupted and the pool cannot be opened.
Is there at least a way to determine what caused this error? Is it a hardware
issue? Is it a possible defect in ZFS?
I don't think it's a hardware issue, because the hardware seems to still be
working fine, and has been for months.
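For what it's worth, one way to probe this kind of "pool metadata is corrupted" failure without importing the pool is to dump the on-disk labels directly; a sketch, with a hypothetical device path:

```shell
# Dump the ZFS labels from one member disk; four intact, mutually
# consistent labels point away from simple media corruption and toward
# damage higher up in the pool's metadata tree.
# The device path below is an assumption, not taken from this thread.
zdb -l /dev/dsk/c0t0d0s0
```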