Re: [zfs-discuss] ZFS SCRUB

2010-08-10 Thread Cindy Swearingen

The ZFS Best Practices Guide is here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have datacenter-quality drives, consider a monthly
scrubbing schedule.
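
As a rough sketch, such a schedule could be automated with a root crontab
entry like the one below; the pool name "tank" is only an example, so adjust
the name and timing for your environment:

  # root crontab entry: scrub the pool "tank" every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank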

Thanks,

Cindy

On 08/09/10 05:33, Mohammed Sadiq wrote:

Hi
 
Is it recommended to do a scrub while the filesystem is mounted? How 
frequently do we have to scrub, and under what circumstances?
 
Please suggest.
 
Thanks/Regards

Mohammed Sadiq




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] ZFS SCRUB

2010-08-09 Thread Andrew Gabriel

Mohammed Sadiq wrote:

Hi
 
Is it recommended to do a scrub while the filesystem is mounted? How 
frequently do we have to scrub, and under what circumstances?


You can scrub while the filesystems are mounted - most people do; 
there's no reason to unmount for a scrub. (Scrub is pool level, not 
filesystem level.)


Scrub does noticeably slow the filesystem, so pick a time of low 
application load or a time when performance isn't critical. If it 
overruns into a busy period, you can cancel the scrub. Unfortunately, 
you can't pause and resume - there's an RFE for this - so if you cancel 
a scrub you can't restart it from where it got to; it has to start again 
from the beginning.
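
For reference, a rough sketch of that start/check/cancel cycle, with a
hypothetical pool named "tank":

  # Start a scrub (it runs in the background).
  zpool scrub tank

  # Check progress; the status output shows percent done and a time estimate.
  zpool status tank

  # Stop the scrub if it runs into a busy period; a later scrub
  # has to start again from the beginning.
  zpool scrub -s tank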


You should scrub occasionally anyway. That's your check that data you 
haven't accessed in your application isn't rotting on the disks.


You should also do a scrub before you do a planned reduction of the pool 
redundancy (e.g. if you're going to detach a mirror side in order to 
attach a larger disk), most particularly if you are reducing the 
redundancy to nothing.
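
A sketch of that sequence, with hypothetical pool and device names (tank,
c1t0d0/c1t1d0 as the existing mirror, c2t0d0 as the larger replacement disk):

  # Make sure the data is clean before reducing redundancy.
  zpool scrub tank
  zpool status tank          # wait for the scrub to complete with 0 errors

  # Detach one side of the mirror, then attach the larger disk to the
  # remaining side; ZFS resilvers onto the new disk.
  zpool detach tank c1t1d0
  zpool attach tank c1t0d0 c2t0d0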


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS SCRUB

2010-08-09 Thread Edho P Arief
On Mon, Aug 9, 2010 at 6:33 PM, Mohammed Sadiq  wrote:
> Hi
>
> Is it recommended to do a scrub while the filesystem is mounted?

yes

> How frequently do we have to scrub, and under what circumstances?
>

Some people say weekly, some say monthly, and some, like myself, scrub
whenever they remember to. It's usually done when the load is at its lightest.
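
If, like me, you only scrub when you remember to, the last-scrub line in
zpool status is a quick way to see how long it has been (the pool name "tank"
is just an example):

  # Show when the pool was last scrubbed, or the progress of a running scrub.
  zpool status tank | grep scrub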



-- 
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS SCRUB

2010-08-09 Thread Mohammed Sadiq
Hi

Is it recommended to do a scrub while the filesystem is mounted? How
frequently do we have to scrub, and under what circumstances?

Please suggest.

Thanks/Regards
Mohammed Sadiq
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS scrub time estimate up to 50% off

2008-07-21 Thread Bob Friesenhahn
While using Solaris 10U4 with all patches applied, I notice that the 
estimated time to complete a scrub is way off:

scrub in progress, 68.81% done, 0h7m to go

When it is about 50% done, the estimated time to complete is something 
like 9 minutes when it really needs over 30 minutes more.

Does the OpenSolaris version of ZFS exhibit this anomaly?
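
For anyone who wants to check on their own build, a rough way to watch the
drift is to sample the scrub status line every few minutes (the pool name
"tank" is only a placeholder):

  # Log the scrub progress line every 5 minutes until the scrub finishes.
  while zpool status tank | grep 'scrub in progress' > /dev/null; do
      date
      zpool status tank | grep scrub
      sleep 300
  done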

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs scrub taking very very long

2008-03-08 Thread Orvar Korvar
I am using b68 or b69 (can't remember) and the scrubs take forever. They never 
finish. 

It turned out to be a bug in that OpenSolaris build. I posted the question 
here somewhere, and it was confirmed as a bug.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs scrub taking very very long

2008-03-07 Thread Justin Vassallo
Each partition in the pool is 320 GB; the disks have only one partition per disk.

Each disk is connected to a separate USB2 port on a X4200 M2.

The scrub took around 6 hrs to complete, which I am told is acceptable (I
was not aware it takes so long when I first posted; thanks to those
who replied).

What I must note is that the file systems on these pools were terribly slow
and unusable during the whole scrub, which I understand is not normal.
During this time, the disks were 30% busy (which is normal for these disks),
LWP switching was quite low at 10k/second, and the CPU was relaxed at <10%.
Should I conclude that I have an I/O bottleneck, or would this filesystem
locking be considered weird ZFS behavior?
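
One way I could narrow this down (just a sketch, using the pool name from
this thread) is to watch the pool and the underlying disks while the scrub
runs:

  # Per-vdev I/O for the pool, sampled every 10 seconds.
  zpool iostat -v external 10

  # Per-device service times and %busy for the underlying disks.
  iostat -xn 10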

Thanks
justin

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vincent Fox
Sent: 06 March 2008 18:53
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs scrub taking very very long

Insufficient data.

How big is the pool? How much stored?

Are the external drives all on the same USB bus?

I am switching to eSATA for my next external drive setup as both USB 2.0 and
firewire are just too fricking slow for the large drives these days.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs scrub taking very very long

2008-03-06 Thread Vincent Fox
Insufficient data.

How big is the pool? How much stored?

Are the external drives all on the same USB bus?

I am switching to eSATA for my next external drive setup as both USB 2.0 and 
firewire are just too fricking slow for the large drives these days.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs scrub taking very very long

2008-03-06 Thread Tomas Ögren
On 06 March, 2008 - Justin Vassallo sent me these 12K bytes:

> Hello,
> 
>  
> 
> I ran a zpool scrub on two zpools: one located on internal SAS drives, the
> second on external USB SATA drives.
> 
>  
> 
> The internal pool finished scrubbing in no time, while the external pool is
> taking incredibly long.

Are you taking periodic snapshots? Currently that will restart
scrubs.
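
A quick way to check whether something is snapshotting in the background
(just a sketch):

  # List the most recent snapshots with their creation times.
  zfs list -t snapshot -o name,creation | tail -20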

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs scrub taking very very long

2008-03-06 Thread Justin Vassallo
Hello,

 

I ran a zpool scrub on two zpools: one located on internal SAS drives, the
second on external USB SATA drives.

 

The internal pool finished scrubbing in no time, while the external pool is
taking incredibly long.

 

Typical data transfer rate to this external pool is 80MB/s.

 

Any help would be greatly appreciated.

 

justin

 

# zpool status external
  pool: external
 state: ONLINE
 scrub: scrub in progress, 0.01% done, 161h29m to go
config:

        NAME          STATE     READ WRITE CKSUM
        external      ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c6t0d0s0  ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0

errors: No known data errors



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zfs scrub and fjge interface on Prime Power

2006-11-19 Thread eric.bourgi

We have noticed that each time we initiate a scrub on our zpool, one of
the network interfaces (fjge1) on our PP650 goes down
(always that interface; all others are fine).
If we cancel the scrub, a simple ifconfig down and ifconfig up of the
interface fixes the problem!
When the interface goes down, no messages are issued in /var/adm/messages.
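
To gather more data next time, one simple check (just a suggestion; adjust
the interface name if needed) is to watch the interface counters while a
scrub runs:

  # Sample fjge1 packet/error counters every 10 seconds during the scrub.
  netstat -i -I fjge1 10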

>uname -a
SunOS tsmsun1 5.10 Generic_118833-17 sun4us sparc FJSV,GPUZC-M
>ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff00
fjge0: flags=1000863 mtu 1500 index 3
        inet 151.163.5.19 netmask fe00 broadcast 151.163.5.255
        ether 0:0:e:25:2c:ea
fjge1: flags=1000863 mtu 1500 index 4
        inet 151.163.81.65 netmask ff00 broadcast 151.163.81.255
        ether 0:0:e:25:2c:ea
fjge2: flags=1000843 mtu 1500 index 2
        inet 151.163.115.42 netmask  broadcast 151.163.255.255
        ether 0:0:e:25:2c:ea
fjge3: flags=1000863 mtu 1500 index 5
        inet 172.20.0.5 netmask ff00 broadcast 172.20.0.255
        ether 0:0:e:25:2c:ea
hme0: flags=1000863 mtu 1500 index 6
        inet 151.163.121.184 netmask ffc0 broadcast 151.163.121.191
        ether 0:0:e:25:2c:ea

>zpool list
NAME             SIZE    USED    AVAIL    CAP  HEALTH  ALTROOT
z_tsmsun1_pool  18.0T   9.13T    8.91T    50%  ONLINE  -
tsmsun1 - /home/root >zpool status 
  pool: z_tsmsun1_pool
 state: ONLINE
 scrub: scrub stopped with 0 errors on Sat Nov 18 02:00:02 2006
config:

        NAME                                  STATE     READ WRITE CKSUM
        z_tsmsun1_pool                        ONLINE       0     0     0
          c22t600C0FF000678A0A86F3D901d0s0    ONLINE       0     0     0
          c22t600C0FF000678A0A86F3D900d0s0    ONLINE       0     0     0
          c22t600C0FF00068190A86F3D901d0s0    ONLINE       0     0     0
          c22t600C0FF00068190A86F3D900d0s0    ONLINE       0     0     0
          c22t600C0FF00068191A598ED500d0s0    ONLINE       0     0     0
          c22t600C0FF000678A1A598ED500d0s0    ONLINE       0     0     0
          c22t600C0FF00068191A598ED501d0s0    ONLINE       0     0     0
          c22t600C0FF000681943A7223100d0s0    ONLINE       0     0     0
          c22t600C0FF000681943A7223101d0      ONLINE       0     0     0
          c22t600C0FF000681932BBD24400d0s0    ONLINE       0     0     0
          c22t600C0FF000681932BBD24401d0s0    ONLINE       0     0     0
          c22t600C0FF000678A43A7223100d0s0    ONLINE       0     0     0
          c22t600C0FF000678A2055211B01d0s0    ONLINE       0     0     0
          c22t600C0FF000678A2055211B00d0s0    ONLINE       0     0     0
          c22t600C0FF000678A32BBD24401d0s0    ONLINE       0     0     0
          c22t600C0FF000678A1A598ED501d0s0    ONLINE       0     0     0
          c22t600C0FF000678A32BBD24400d0s0    ONLINE       0     0     0
          c22t600C0FF000678A43A7223101d0s0    ONLINE       0     0     0
          c22t600C0FF00068192055211B00d0s0    ONLINE       0     0     0
          c22t600C0FF00068192055211B01d0s0    ONLINE       0     0     0
          c22t600C0FF000678A44F3D81B00d0s0    ONLINE       0     0     0
          c22t600C0FF000678A44F3D81B01d0s0    ONLINE       0     0     0
          c22t600C0FF000681944F3D81B00d0s0    ONLINE       0     0     0
          c22t600C0FF000681944F3D81B01d0s0    ONLINE       0     0     0

errors: No known data errors
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs scrub question

2006-09-20 Thread Wee Yeh Tan

Peter,

I'll first check /var/adm/messages to see if there are any problems
with the following disks:

c10t600A0B800011730E66F444C5EE7Ed0
c10t600A0B800011730E66F644C5EE96d0
c10t600A0B800011652EE5CF44C5EEA7d0
c10t600A0B800011730E66F844C5EEBAd0

The checksum errors seem to be concentrated around these.

--
Just me,
Wire ...


On 9/20/06, Peter Wilk <[EMAIL PROTECTED]> wrote:

All,

IHAC (I have a customer) who called in an issue with the following description.

I have a system which has two ZFS storage pools.  One of the pools is on
hardware which is having problems, so I wanted to start the system with
only one of the two ZFS storage pools.  How do I NOT mount the second ZFS
storage pool?

engineer response:
ZFS has a number of commands for this. If you want to make it so the
system does not use the pool, you can offline the pool until you have a
chance to repair it. From the ZFS manual:

zpool offline <pool> <device>
zpool offline myzfspool c1t1d0

However, note you may not be able to offline it if it is the only device
in the pool, in which case you would have to add another device so that
data can be transferred until the bad drive is replaced, otherwise there
would be data loss.

You may want to check the status with:

zpool status <pool>

Depending on what you find here, you may be able to remove the bad
device and replace it or you may have to try and back up the data,
destroy the pool and recreate it on the new device. If you reference the
ZFS Administration Manual, the full information is listed on pages 135-140.


customer response:
Since all my disks for the entire pool were not available, I ended up
exporting the entire zpool.  At that point I could bring up the system
with the zpool which was operational.

After we got the storage subsystem fixed we brought the second zpool
back online with zpool import.
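
For reference, a rough sketch of that export/import sequence, with a
hypothetical pool name "pool2":

  # Export the pool whose storage is unavailable so the system can boot cleanly.
  zpool export pool2

  # After the storage subsystem is repaired, bring the pool back.
  zpool import pool2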

Since we had problems with the storage we ran zpool scrub on the pool.
Checksum errors were found, on a single device in each of the raidz
groups.  I have been told that ZFS will correct these errors.  After
zpool scrub ran to completion, we cleared the errors, and we are now in
the process of running it again.  There are several hours to go, but it
has already flagged additional checksum errors.  I would have thought
the original run of zpool scrub would have fixed these.


Not fully understanding ZFS and only just learning of this command, zfs
scrub, I believe zfs scrub is similar to an fsck. It appears that zfs
scrub is not resolving the issue. Any suggestion would be helpful.

Thanks

Peter

Please respond to me directly for I may  not be on this alias



Customer was told to do the following commands and I believe it did not
clear up his issue..see below

zpool status -v (should list the status and what errors it found)
zpool scrub (one more time to see if more errors are found)
zpool status -v (this should show us an after picture)

Sorry I left that out, yes, you would want to run a zpool clear before
the scrub. You may also want to output a zpool status after the clear to
make sure the count cleared out. When you run the commands, you may just
want to do a script session to capture everything.
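
Put together, the captured session might look something like this sketch
(the pool name "mypool" is hypothetical):

  # Record the whole session to a file for later review.
  script /tmp/scrub-session.log

  zpool status -v mypool     # before picture
  zpool clear mypool         # reset the error counters
  zpool status -v mypool     # confirm the counters are zero
  zpool scrub mypool         # start another scrub
  zpool status -v mypool     # check again after the scrub completes

  exit                       # end the script(1) session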

latest email from customer

After
I am attaching some output files which show "zpool scrub" running
multiple times and catching checksums each time.  Remember, I have
now run zpool scrub about 3 - 4 times.




=
 __
/_/\
   /_\\ \Peter Wilk -  OS/Security Support
  /_\ \\ /   Sun Microsystems
 /_/ \/ / /  1 Network Drive,  P.O Box 4004
/_/ /   \//\ Burlington, Massachusetts 01803-0904
\_\//\   / / 1-800-USA-4SUN, opt 1, opt 1,#
 \_/ / /\ /  Email: [EMAIL PROTECTED]
  \_/ \\ \   =
   \_\ \\
\_\/

=

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss