On 07/18/10 11:19 AM, marco wrote:
I'm seeing weird differences between two raidz pools: one created on a recent 
FreeBSD 9.0-CURRENT amd64 box containing the ZFS v15 bits, the other on an old 
osol build.
The raidz pool on the FreeBSD box was created from three 2 TB SATA drives.
The raidz pool on the osol box was created in the past from three smaller drives, 
but all three drives have since been replaced by 2 TB SATA drives as well (using 
the autoexpand property).

The difference I don't understand is that 'zfs list' reports very different 
available space on the two systems.

FreeBSD raidz pool:

% zpool status -v pool1
pool: pool1
state: ONLINE
scrub: none requested
config:

NAME        STATE   READ WRITE CKSUM
pool1       ONLINE     0     0     0
  raidz1    ONLINE     0     0     0
    ada2    ONLINE     0     0     0
    ada3    ONLINE     0     0     0
    ada4    ONLINE     0     0     0

errors: No known data errors

Since this is a new pool, it was automatically created as a version 15 pool:

% zpool get version pool1
NAME   PROPERTY  VALUE  SOURCE
pool1  version   15     default

% zfs get version pool1
NAME   PROPERTY  VALUE  SOURCE
pool1  version   4      -

% zpool list pool1
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

% zfs list -r pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

<-- Is this behavior correct, that one of the three SATA drives is then only used 
as a single parity disk and is therefore not added to the actual total available 
space?

Yes, that is correct. zfs list reports usable space, which is roughly two of the 
three drives' worth of capacity (the parity isn't confined to one device; it is 
distributed across all three).
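A quick back-of-the-envelope sketch of that accounting (my own arithmetic, 
ignoring metadata and allocation overhead, so it won't match zfs list exactly):

```python
# Rough rule of thumb: a raidz1 vdev of N drives exposes about (N - 1)/N of
# the raw capacity, since one drive's worth of space goes to distributed parity.

def raidz_usable(raw_tb: float, drives: int, parity: int = 1) -> float:
    """Approximate usable space for a raidz vdev, ignoring overhead."""
    return raw_tb * (drives - parity) / drives

# pool1: 'zpool list' reports 5.44T raw across three drives.
print(round(raidz_usable(5.44, 3), 2))
```

That gives about 3.63T, close to the 3.56T that 'zfs list' shows for pool1 once 
real-world overhead is subtracted.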

Now we switch to the raidz pool built on the osol box:

% zpool status -v pool2
pool: pool2

NAME        STATE   READ WRITE CKSUM
pool2       ONLINE     0     0     0
  raidz1    ONLINE     0     0     0
    c2d1    ONLINE     0     0     0
    c1d0    ONLINE     0     0     0
    c2d0    ONLINE     0     0     0

% zpool get version pool2
NAME   PROPERTY  VALUE  SOURCE
pool2  version   14     local

% zfs get version pool2
NAME   PROPERTY  VALUE  SOURCE
pool2  version   1      -

% zpool list pool2
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool2  5.46T  4.61T   870G    84%  ONLINE  -

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2

<-- Clearly different reported AVAILABLE space on the osol box: 3.32T + 2.06T = 
5.38T, which seems correct once overhead is taken into account, since it should 
be a little less than what 'zpool list' reports as available space.

No compression is being used on either of the raidz pools.

Are you sure? That result looks odd. It is what I'd expect to see from a stripe, rather than a raidz.
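A quick sanity check of the numbers (my own arithmetic, assuming no compression 
or snapshots are inflating the figures) shows why it looks like a stripe: the 
USED + AVAIL total from zfs list matches the raw capacity, not the roughly 
two-thirds a raidz1 should expose.

```python
# Compare pool2's 'zfs list' figures against what raidz1 vs. a plain stripe
# would expose. All numbers are taken from the output quoted above.
raw = 5.46                 # TB, from 'zpool list pool2'
used, avail = 3.32, 2.06   # TB, from 'zfs list -r pool2'

stripe_expected = raw            # no parity: all capacity is usable
raidz1_expected = raw * 2 / 3    # one of three drives' worth lost to parity

total = used + avail
print(f"zfs list total: {total:.2f}T")            # 5.38T
print(f"stripe expects: {stripe_expected:.2f}T")  # 5.46T -- close match
print(f"raidz1 expects: {raidz1_expected:.2f}T")  # 3.64T -- far off
```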

What does "zpool iostat -v pool2" report?

--
Ian.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
