[zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Brian
Why does resilvering an entire disk yield a different amount of resilvered
data each time?
I have read that ZFS only resilvers what it needs to, but when replacing an
entire disk with a freshly formatted, empty one, you would expect the amount
of data to be the same each time.
I'm getting different results in the 'zpool status' output (below).



For example (I have a two-way mirror with a small file on it); raidz pools
behave the same way.


bash-3.2# zpool replace zp c2t27d0 c2t28d0
bash-3.2# zpool status
  pool: zp
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:21 2009
config:

NAME STATE READ WRITE CKSUM
zp   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t26d0  ONLINE   0 0 0
c2t28d0  ONLINE   0 0 0  73K resilvered

errors: No known data errors
bash-3.2# 
bash-3.2# zpool replace zp c2t28d0 c2t29d0
bash-3.2# zpool status
  pool: zp
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:52 2009
config:

NAME STATE READ WRITE CKSUM
zp   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t26d0  ONLINE   0 0 0
c2t29d0  ONLINE   0 0 0  83.5K resilvered

errors: No known data errors
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Bill Sommerfeld
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote:
 Why does resilvering an entire disk yield a different amount of resilvered
 data each time?
 I have read that ZFS only resilvers what it needs to, but when replacing an
 entire disk with a freshly formatted, empty one, you would expect the amount
 of data to be the same each time.
 I'm getting different results in the 'zpool status' output (below).

Replacing a disk adds an entry to the zpool history log, which
requires allocating blocks, which in turn changes what's stored in the pool.
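On a live system, the mechanism Bill describes can be seen with `zpool history zp`, which lists one record per administrative action. As a toy sketch of the effect (plain files, not ZFS; the log filename is made up), an append-only log grows with every replace, so the pool's on-disk contents are never quite identical between runs:

```shell
# Hypothetical stand-in for the pool's history log: each "replace"
# appends a record, and storing those records consumes blocks.
log=pool_history.log
: > "$log"
echo "2009-10-26.09:46:21 zpool replace zp c2t27d0 c2t28d0" >> "$log"
echo "2009-10-26.09:46:52 zpool replace zp c2t28d0 c2t29d0" >> "$log"
wc -l < "$log"    # two records so far; the log only ever grows
rm -f "$log"
```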




Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread A Darren Dunham
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote:
 Why does resilvering an entire disk yield a different amount of resilvered
 data each time?
 I have read that ZFS only resilvers what it needs to, but when replacing an
 entire disk with a freshly formatted, empty one, you would expect the amount
 of data to be the same each time.

As long as the amount of data on the other side of the mirror is
identical, you should be correct.  In other words, it copies the in-use
blocks over.  It doesn't copy every block on the disk.
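A rough illustration of in-use blocks versus raw capacity (a toy sketch using a sparse file with GNU `truncate` and `du`, not ZFS): the logical size of a device says little about how many blocks actually need copying.

```shell
# A sparse 100 MiB file, like a mostly-empty vdev, allocates almost
# nothing on disk even though its logical size is 100 MiB.
truncate -s 100M sparse.img
ls -l sparse.img | awk '{print $5}'   # logical size in bytes: 104857600
du -k sparse.img | awk '{print $1}'   # blocks actually allocated, in KiB
rm -f sparse.img
```

A resilver has to copy only the equivalent of the `du` figure, not the `ls` one.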

 I'm getting different results when viewing the 'zpool status' info (below)
 
 
 
 For example ( I have a two-way mirror with a small file on it )
 Raidz pools behave the same.
 
 
 bash-3.2# zpool replace zp c2t27d0 c2t28d0
 bash-3.2# zpool status
   pool: zp
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:21 
 2009
 config:
 
 NAME STATE READ WRITE CKSUM
 zp   ONLINE   0 0 0
   mirror ONLINE   0 0 0
 c2t26d0  ONLINE   0 0 0
 c2t28d0  ONLINE   0 0 0  73K resilvered
 
 errors: No known data errors
 bash-3.2# 
 bash-3.2# zpool replace zp c2t28d0 c2t29d0
 bash-3.2# zpool status
   pool: zp
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:52 
 2009
 config:
 
 NAME STATE READ WRITE CKSUM
 zp   ONLINE   0 0 0
   mirror ONLINE   0 0 0
 c2t26d0  ONLINE   0 0 0
 c2t29d0  ONLINE   0 0 0  83.5K resilvered

The difference is only about 10K.  That's not much.  The live filesystem
is in flux on the disks as metadata trees are updated, assuming you have
any activity at all (even reads might cause inode timestamps
to be rewritten).  I wouldn't consider this difference significant.

-- 
Darren