Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-18 Thread Craig Cory
When viewing a raidz/raidz1/raidz2 pool, 'zpool list' and 'zpool status' report
the total "device" space; e.g. three 1 TB drives in a raidz will show approx.
3 TB of space. 'zfs list' shows available FILESYSTEM space; the same three 1 TB
raidz disks show approx. 2 TB.
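
A quick way to see the two views side by side on any raidz pool ("tank" below is
just a placeholder pool name):

% zpool list tank   # SIZE/AVAIL count every device, parity included (~3 TB here)
% zfs list tank     # AVAIL is usable filesystem space (~2 TB here)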


Logic wrote:
> Ian Collins (i...@ianshome.com) wrote:
>> On 07/18/10 11:19 AM, marco wrote:
>>> *snip*
>>>
>>>
>> Yes, that is correct. zfs list reports usable space, which is 2 out of
>> the three drives (parity isn't confined to one device).
>>
>>> *snip*
>>>
>>>
>> Are you sure?  That result looks odd.  It is what I'd expect to see from
>> a stripe, rather than a raidz.
>>
>> What does "zpool iostat -v pool2" report?
>
> Hi Ian,
>
> I'm the friend with the osol release (snv_117) installed.
>
> The output you asked for is:
> % zpool iostat -v pool2
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> pool2       4.26T  1.20T    208     78  22.1M   409K
>   raidz1    4.26T  1.20T    208     78  22.1M   409K
>     c2d1        -      -     81     37  7.97M   208K
>     c1d0        -      -     82     38  7.85M   209K
>     c2d0        -      -     79     37  7.79M   209K
> ----------  -----  -----  -----  -----  -----  -----
>
> It really is a raidz, created a long time ago with build 27a, and I have been
> replacing the disks ever since, by removing one disk at a time and waiting for
> the resilvering to be done.
>
> greets Leon


Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Logic
Ian Collins (i...@ianshome.com) wrote:
> On 07/18/10 11:19 AM, marco wrote:
>> *snip*
>>
>>
> Yes, that is correct. zfs list reports usable space, which is 2 out of  
> the three drives (parity isn't confined to one device).
>
>> *snip*
>>
>>
> Are you sure?  That result looks odd.  It is what I'd expect to see from  
> a stripe, rather than a raidz.
>
> What does "zpool iostat -v pool2" report?

Hi Ian,

I'm the friend with the osol release (snv_117) installed.

The output you asked for is:
% zpool iostat -v pool2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool2       4.26T  1.20T    208     78  22.1M   409K
  raidz1    4.26T  1.20T    208     78  22.1M   409K
    c2d1        -      -     81     37  7.97M   208K
    c1d0        -      -     82     38  7.85M   209K
    c2d0        -      -     79     37  7.79M   209K
----------  -----  -----  -----  -----  -----  -----

It really is a raidz, created a long time ago with build 27a, and I have been
replacing the disks ever since, by removing one disk at a time and waiting for
the resilvering to be done.
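
For reference, a rough sketch of that one-disk-at-a-time procedure (device names
as in the iostat output above; the exact commands aren't preserved, so this is
only an approximate equivalent):

% zpool replace pool2 c2d1        # after physically swapping in the larger disk
% zpool status pool2              # wait until the resilver is reported complete
  (repeat for c1d0 and c2d0, one disk at a time)
% zpool set autoexpand=on pool2   # on builds with the property, so the new capacity is used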

greets Leon




Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Ian Collins

On 07/18/10 11:19 AM, marco wrote:

I'm seeing weird differences between two raidz pools: one created on a recent
FreeBSD 9.0-CURRENT amd64 box containing the ZFS v15 bits, the other on an old
osol build.
The raidz pool on the FreeBSD box was created from 3 2 TB SATA drives.
The raidz pool on the osol box was created in the past from 3 smaller drives,
but all 3 drives have since been replaced by 2 TB SATA drives as well (using
the autoexpand property).

The weird difference that I don't understand is that 'zfs list' reports very
different available space on the two systems.

FreeBSD raidz pool:

% zpool status -v pool1
pool: pool1
state: ONLINE
scrub: none requested
config:

  NAME        STATE     READ WRITE CKSUM
  pool1       ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      ada2    ONLINE       0     0     0
      ada3    ONLINE       0     0     0
      ada4    ONLINE       0     0     0

errors: No known data errors

Since this is a new pool, it has automatically been created as a version 15 pool:

% zpool get version pool1
NAME   PROPERTY  VALUE   SOURCE
pool1  version   15      default

% zfs get version pool1
NAME   PROPERTY  VALUE   SOURCE
pool1  version   4       -

% zpool list pool1
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

% zfs list -r pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

<-- Is this behavior correct, that one of the three SATA drives is then only
    used as a single parity disk and therefore not added to the actual total
    available space?

Yes, that is correct. zfs list reports usable space, which is 2 out of 
the three drives (parity isn't confined to one device).
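
For reference, a rough back-of-the-envelope check against pool1's three 2 TB
drives (treating 2 TB as 2*10^12 bytes; figures are approximate, and the small
remaining gap is metadata/reservation overhead):

% echo "scale=2; 3 * 2 * 10^12 / 2^40" | bc   # ~5.45 TiB raw    -> zpool list SIZE 5.44T
% echo "scale=2; 2 * 2 * 10^12 / 2^40" | bc   # ~3.63 TiB usable -> zfs list AVAIL 3.56T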



Now we switch to the osol-built raidz pool:

% zpool status -v pool2
pool: pool2

  NAME        STATE     READ WRITE CKSUM
  pool2       ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      c2d1    ONLINE       0     0     0
      c1d0    ONLINE       0     0     0
      c2d0    ONLINE       0     0     0

% zpool get version pool2
NAME   PROPERTY  VALUE   SOURCE
pool2  version   14      local

% zfs get version pool2
NAME   PROPERTY  VALUE   SOURCE
pool2  version   1       -

% zpool list pool2
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool2  5.46T  4.61T   870G    84%  ONLINE  -

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2

<-- Clearly different reported AVAILABLE space on the osol box (3.32T + 2.06T =
    5.38T, which seems about right allowing for overhead: it should be a little
    less than what 'zpool list' reports as available space).

No compression is being used on either of the raidz pools.

Are you sure?  That result looks odd.  It is what I'd expect to see from 
a stripe, rather than a raidz.
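
A rough comparison using the figures quoted above (approximate arithmetic only):

% echo "3.32 + 2.06" | bc             # 5.38 -- close to the full 5.46T of raw space
% echo "scale=2; 5.46 * 2 / 3" | bc   # 3.64 -- what a 3-disk raidz1 would be expected to show (cf. pool1)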


What does "zpool iostat -v pool2" report?

--
Ian.



[zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread marco
I'm seeing weird differences between two raidz pools: one created on a recent
FreeBSD 9.0-CURRENT amd64 box containing the ZFS v15 bits, the other on an old
osol build.
The raidz pool on the FreeBSD box was created from 3 2 TB SATA drives.
The raidz pool on the osol box was created in the past from 3 smaller drives,
but all 3 drives have since been replaced by 2 TB SATA drives as well (using
the autoexpand property).
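
For concreteness, the two setups would correspond to something along these lines
(the exact commands aren't shown in this post; device and pool names are taken
from the output further down):

% zpool create pool1 raidz ada2 ada3 ada4   # FreeBSD box: new raidz1 from 3 x 2 TB drives
% zpool set autoexpand=on pool2             # osol box: grow into the swapped-in 2 TB drives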

The weird difference that I don't understand is that 'zfs list' reports very
different available space on the two systems.

FreeBSD raidz pool:

% zpool status -v pool1
pool: pool1
state: ONLINE
scrub: none requested
config:

  NAME        STATE     READ WRITE CKSUM
  pool1       ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      ada2    ONLINE       0     0     0
      ada3    ONLINE       0     0     0
      ada4    ONLINE       0     0     0

errors: No known data errors

Since this is a new pool, it has automatically been created as a version 15 pool:

% zpool get version pool1
NAME   PROPERTY  VALUE   SOURCE
pool1  version   15      default

% zfs get version pool1
NAME   PROPERTY  VALUE   SOURCE
pool1  version   4       -

% zpool list pool1   
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

% zfs list -r pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

<-- Is this behavior correct, that one of the three SATA drives is then only
    used as a single parity disk and therefore not added to the actual total
    available space?

Now we switch to the osol-built raidz pool:

% zpool status -v pool2
pool: pool2

  NAME        STATE     READ WRITE CKSUM
  pool2       ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      c2d1    ONLINE       0     0     0
      c1d0    ONLINE       0     0     0
      c2d0    ONLINE       0     0     0

% zpool get version pool2
NAME   PROPERTY  VALUE   SOURCE
pool2  version   14      local

% zfs get version pool2
NAME   PROPERTY  VALUE   SOURCE
pool2  version   1       -

% zpool list pool2
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool2  5.46T  4.61T   870G    84%  ONLINE  -

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2

<-- Clearly different reported AVAILABLE space on the osol box (3.32T + 2.06T =
    5.38T, which seems about right allowing for overhead: it should be a little
    less than what 'zpool list' reports as available space).

No compression is being used on either of the raidz pools.
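
For what it's worth, this can be double-checked with the standard property query
(pool names as above):

% zfs get -r compression pool1
% zfs get -r compression pool2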

Hope someone can shed some light on this.

marco

-- 
Use UNIX or Die.