> It seems as though every time I scrub my mirror I get a few megabytes
> of checksum errors on one disk (luckily corrected by the other). Is
> there some way of tracking down a problem which might be persistent?

Check 'iostat -en' or 'iostat -En devname'. If the latter shows media
errors, the drive is ...
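To spot the failing side quickly, the 'Media Error' counters in 'iostat -En' output can be filtered with awk. A minimal sketch, run here against hypothetical sample output (the device names and counts are invented, not from the thread):

```shell
# Hypothetical sample of 'iostat -En' output; the real command prints
# one block per device, with a 'Media Error:' counter line in each.
sample='c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
c0t1d0 Soft Errors: 0 Hard Errors: 12 Transport Errors: 0
Media Error: 7 Device Not Ready: 0 No Device: 0 Recoverable: 3'

# Flag any device whose "Media Error" count is non-zero.
echo "$sample" | awk '
  /^c[0-9]/ { dev = $1 }                      # remember current device
  /Media Error:/ && $3 != 0 { print dev, "media errors:", $3 }'
```

On a live system you would pipe 'iostat -En' straight into the awk filter instead of the sample text.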
On Mon, Jun 18, 2012 at 3:55 PM, sol a...@yahoo.com wrote:
> It seems as though every time I scrub my mirror I get a few megabytes of
> checksum errors on one disk (luckily corrected by the other). Is there some
> way of tracking down a problem which might be persistent?
Check the output of 'fmdump' ...
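'fmdump -e' dumps the FMA ereport log, and tallying its class column shows which error keeps recurring across scrubs. A sketch over hypothetical sample output (the timestamps and mix of classes are illustrative):

```shell
# Hypothetical sample of 'fmdump -e' output: a TIME column followed
# by the ereport class for each logged error.
sample='TIME                 CLASS
Jun 18 14:02:11.3021 ereport.io.scsi.cmd.disk.dev.rqs.derr
Jun 18 14:02:14.9950 ereport.io.scsi.cmd.disk.dev.rqs.derr
Jun 18 14:05:47.0102 ereport.fs.zfs.checksum'

# Tally ereports by class, most frequent first.
echo "$sample" | awk '/ereport/ { count[$NF]++ }
                      END { for (c in count) print count[c], c }' | sort -rn
```

'fmdump -eV' on the same log then shows the full detail (including the device path) for whichever class dominates.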
Hello
It seems as though every time I scrub my mirror I get a few megabytes of
checksum errors on one disk (luckily corrected by the other). Is there some way
of tracking down a problem which might be persistent?
I wonder if it's anything to do with these messages which are constantly ...
Cheers, I did try that, but still got the same total on import - 2.73TB.
I even thought I might have just made a mistake with the numbers, so I made a
sort of 'quarter scale model' in VMware and OSOL 2009.06, with 3x250G and
1x187G. That gave me a size of 744GB, which is *approx* 1/4 of what I ...
Try exporting and reimporting the pool. That has done the trick for me in the
past.
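One way to sanity-check the quarter-scale model's numbers: raidz clips every member to the size of the smallest disk, so 'zpool list' reports roughly n_disks x smallest (parity included), while usable space is about one disk's worth less. A sketch with the thread's disk sizes; that the small gap to the reported 744GB is just GB/GiB rounding is an assumption, not something the thread confirms:

```shell
# Disk sizes in GB for the quarter-scale test pool from the thread:
# 3x250G and 1x187G in a single raidz1 vdev.
disks="250 250 250 187"

# raidz treats every member as the size of the smallest disk, so the
# raw pool size reported by 'zpool list' is roughly n * smallest;
# usable space is about one disk's worth less (the parity).
min=100000
n=0
for d in $disks; do
  n=$((n + 1))
  [ "$d" -lt "$min" ] && min=$d
done
echo "raw:    $((n * min)) GB"        # close to the 744GB the model showed
echo "usable: $(((n - 1) * min)) GB"  # after one disk of parity
```

The same arithmetic on the full-size pool explains why adding one smaller disk drags the whole vdev down to its size.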
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I've had an interesting time with this over the past few days ...
After the resilver completed, I had the message 'no known data errors' in a
zpool status.
I guess the title of my post should have been 'how permanent are permanent
errors?'. Now, I don't know whether the action of completing the ...
OK, the resilver has been restarted a number of times over the past few days
due to two main issues - a drive disconnecting itself, and a power failure. I
think my troubles are 100% down to these environmental factors, but I would
like some confidence that, after the resilver has completed, if it ...
On 17.09.09 21:44, Chris Murray wrote:
> Thanks David. Maybe I misunderstand how a replace works? When I added disk
> E, and used 'zpool replace [A] [E]' (still can't remember those drive names),
> I thought that disk A would still be part of the pool, and read from in order
> to build the contents of ...
I can flesh this out with detail if needed, but a brief chain of events is:
1. RAIDZ1 zpool with drives A, B, C, D (I don't have access to see the
original drive names)
2. New disk E. Replaced A with E.
3. Part way through the resilver, drive D was 'removed'
4. 700+ persistent errors detected, and lots ...
On Thu, September 17, 2009 04:29, Chris Murray wrote:
> 2. New disk E. Replaced A with E.
> 3. Part way through resilver, drive D was 'removed'
> 4. 700+ persistent errors detected, and lots of checksum errors on all
> drives. Surprised by this - I thought the absence of one drive could be ...
Thanks David. Maybe I misunderstand how a replace works? When I added disk E,
and used 'zpool replace [A] [E]' (still can't remember those drive names), I
thought that disk A would still be part of the pool, and read from in order to
build the contents of disk E? Sort of like a safer way of ...
I have a non-mirrored ZFS file system which shows the status below. I saw
the thread in the archives about working this out, but it looks like the ZFS
messages have changed. How do I find out what file(s) this is?
[...]
errors: The following persistent errors have been detected:
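On current ZFS the listing quoted above appears under 'zpool status -v' as "Permanent errors have been detected in the following files:", with the affected paths printed directly (older pools showed dataset/object numbers instead). A sketch that pulls the file lines out of hypothetical sample output (the paths are invented):

```shell
# Hypothetical sample of the newer 'zpool status -v' error section;
# affected files are listed as indented paths under the header.
sample='errors: Permanent errors have been detected in the following files:

        /tank/home/chris/vm/disk0.vmdk
        tank/fs@snap:/old/report.log
        <metadata>:<0x0>'

# Keep just the indented per-file lines, stripping the leading spaces.
echo "$sample" | sed -n 's/^[[:space:]]\{1,\}//p'
```

Entries like '<metadata>:<0x0>' mean the damage is in pool metadata rather than a nameable file; a path with '@' points into a snapshot.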