Re: [OmniOS-discuss] zfs diskusage

2014-05-16 Thread Tobias Oetiker
Hi Jim,

Yesterday Jim Klimov wrote:

> Do you have volumes or other sparse reservations? Bytes that are
> not yet allocated still count toward zpool free space (times the
> overhead factor of raidz parities), but if space is reserved for
> these datasets it is no longer counted as free (available for
> writing) in the pool's root filesystem dataset. With this
> consideration in mind, do the numbers fit?

they do indeed ... see my other post :-)

thanks
tobi


-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902



Re: [OmniOS-discuss] zfs diskusage

2014-05-16 Thread Jim Klimov
On 16 May 2014 11:07:52 CEST, Tobias Oetiker  wrote:
>Hi Dan,
>
>Yesterday Dan McDonald wrote:
>
>> On May 15, 2014, at 8:05 AM, Tobias Oetiker  wrote:
>>
>> > Today we were out of disk space on one of our pools ... a few removed
>> > snapshots later all is fine, except that I find that I don't really
>> > understand the numbers ... can anyone enlighten me?
>> >
>> > # zpool list fast
>> > NAME   SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
>> > fast  4.34T  1.74T  2.61T         -    39%  1.22x  ONLINE  -
>> >
>> > # zfs list fast
>> > NAME   USED  AVAIL  REFER  MOUNTPOINT
>> > fast  2.59T   716G  78.5K  /fast
>> >
>> > Why does the 'zpool list' claim that 2.61T is free (61%)
>> > while 'zfs list' sees 716G free (27%)?
>> >
>> > I know there is raidz2 and compression so the numbers  don't match
>> > up, but I don't understand why the ratio is so different between
>> > the two.
>>
>> Richard Elling addressed something similar on a different thread:
>>
>> http://lists.omniti.com/pipermail/omnios-discuss/2014-May/002609.html
>>
>> You're running raidz2, and that's likely why you're seeing the
>> discrepancy between zpool and zfs.
>>
>> Try Richard's advice of running "zfs list -o space" for the breakdown.
>
>well that looks nicer, but the numbers don't change ... the way it
>is, it seems very difficult to judge how much space is still
>available ... 61% free vs 27% free seems to be quite a big
>difference in my eyes.
>
>cheers
>tobi
>
>
>> Dan
>>
>>

Do you have volumes or other sparse reservations? Bytes that are not yet
allocated still count toward zpool free space (times the overhead factor of
raidz parities), but if space is reserved for these datasets it is no longer
counted as free (available for writing) in the pool's root filesystem dataset.
With this consideration in mind, do the numbers fit?
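
One way to check (a rough sketch, using the pool name from this thread;
adjust the dataset names to your own layout):

# zfs list -r -t volume -o name,volsize,refreservation fast
# zfs get -r -s local reservation,refreservation fast

The first command lists every zvol with its size and refreservation, the
second shows any reservations that were set explicitly anywhere in the pool.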

Alternatively, I have a similarly questionably sized pool - a raidz1 over
4*4TB disks, with about 1TB free in zfs list and 2.7TB unallocated in zpool
list, and almost no volumes (none big anyway). I have not yet looked deeper
(i.e. into quotas and reservations), but now that I have been reminded of it -
something does not add up there either ;)

//jim
--
Typos courtesy of K-9 Mail on my Samsung Android


Re: [OmniOS-discuss] zfs diskusage (solved)

2014-05-16 Thread Tobias Oetiker
Yesterday Tobias Oetiker wrote:

> Today we were out of disk space on one of our pools ... a few removed
> snapshots later all is fine, except that I find that I don't really
> understand the numbers ... can anyone enlighten me?
>
> # zpool list fast
> NAME   SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> fast  4.34T  1.74T  2.61T         -    39%  1.22x  ONLINE  -
>
> # zfs list fast
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> fast  2.59T   716G  78.5K  /fast
>
> Why does the 'zpool list' claim that 2.61T is free (61%)
> while 'zfs list' sees 716G free (27%)?
>
> I know there is raidz2 and compression so the numbers  don't match
> up, but I don't understand why the ratio is so different between
> the two.
>
> I checked on other filesystems and there the view from zpool and
> zfs look much more similar.

answering my own question with some help from Dan and IRC:

a) zpool shows the actual free space on the disks ... blocks not
   allocated. Since it is a raidz2 pool, we lose 2 disks to
   redundancy.

b) zfs shows the space really used ... though this alone does not
   really add up yet.

c) The missing piece was the zvols ... zfs by default
   does thick provisioning when you create a volume ...
   so creating a 200G zvol reduces the available space in zfs by
   200G (and then some) without actually allocating any space ...
   so the free space in zpool does not change (see the short
   illustration after this list) ...

d) (Not sure this is true, but I guess) In combination with
   compression a volume will in all likelihood never occupy the
   space reserved for it.
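
As a quick illustration of point (c) above (the volume name is made up, and
the numbers are approximate because of metadata overhead):

# zfs list -o name,available fast   <- note AVAIL
# zfs create -V 200G fast/demo      <- thick-provisioned zvol (the default)
# zfs list -o name,available fast   <- AVAIL drops by a bit more than 200G
# zpool list fast                   <- FREE stays essentially unchanged
# zfs destroy fast/demo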

what fell out of this for me is that I switched the less important
volumes to thin provisioning ... (it could also have been done with
the -s switch at creation time):

# zfs set refreservation=0 pool-2/randomstuff/unimportant-volume
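
The same effect at creation time would look roughly like this (the volume
name is again only illustrative):

# zfs create -s -V 200G fast/demo-sparse    <- -s skips the refreservation
# zfs get refreservation fast/demo-sparse   <- should report none

The flip side of thin provisioning is that writes to such a volume can fail
with "out of space" once the pool fills up, which is why it is best kept to
the less important volumes.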

cheers
tobi

-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902



Re: [OmniOS-discuss] zfs diskusage

2014-05-16 Thread Tobias Oetiker
Hi Dan,

Yesterday Dan McDonald wrote:

> On May 15, 2014, at 8:05 AM, Tobias Oetiker  wrote:
>
> > Today we were out of disk space on one of our pools ... a few removed
> > snapshots later all is fine, except that I find that I don't really
> > understand the numbers ... can anyone enlighten me?
> >
> > # zpool list fast
> > NAME   SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> > fast  4.34T  1.74T  2.61T         -    39%  1.22x  ONLINE  -
> >
> > # zfs list fast
> > NAME   USED  AVAIL  REFER  MOUNTPOINT
> > fast  2.59T   716G  78.5K  /fast
> >
> > Why does the 'zpool list' claim that 2.61T is free (61%)
> > while 'zfs list' sees 716G free (27%)?
> >
> > I know there is raidz2 and compression so the numbers  don't match
> > up, but I don't understand why the ratio is so different between
> > the two.
>
> Richard Elling addressed something similar on a different thread:
>
> http://lists.omniti.com/pipermail/omnios-discuss/2014-May/002609.html
>
> You're running raidz2, and that's likely why you're seeing the discrepancy
> between zpool and zfs.
>
> Try Richard's advice of running "zfs list -o space" for the breakdown.

well that looks nicer, but the numbers don't change ... the way it
is, it seems very difficult to judge how much space is still
available ... 61% free vs 27% free seems to be quite a big
difference in my eyes.

cheers
tobi


> Dan
>
>

-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902



Re: [OmniOS-discuss] zfs diskusage

2014-05-15 Thread Dan McDonald

On May 15, 2014, at 8:05 AM, Tobias Oetiker  wrote:

> Today we were out of disk space on one of our pools ... a few removed
> snapshots later all is fine, except that I find that I don't really
> understand the numbers ... can anyone enlighten me?
> 
> # zpool list fast
> NAME   SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> fast  4.34T  1.74T  2.61T         -    39%  1.22x  ONLINE  -
> 
> # zfs list fast
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> fast  2.59T   716G  78.5K  /fast
> 
> Why does the 'zpool list' claim that 2.61T is free (61%)
> while 'zfs list' sees 716G free (27%)?
> 
> I know there is raidz2 and compression so the numbers  don't match
> up, but I don't understand why the ratio is so different between
> the two.

Richard Elling addressed something similar on a different thread:

http://lists.omniti.com/pipermail/omnios-discuss/2014-May/002609.html

You're running raidz2, and that's likely why you're seeing the discrepancy
between zpool and zfs.

Try Richard's advice of running "zfs list -o space" for the breakdown.
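
For reference, the -o space output breaks the usage down into these columns
(shown here without numbers; add -r to see each dataset in the pool):

# zfs list -r -o space fast
NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD

USEDREFRESERV on a zvol's row is space that a thick-provisioned volume
reserves without having written anything, so it shows up in the zfs
accounting but not in zpool's ALLOC/FREE.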

Dan



[OmniOS-discuss] zfs diskusage

2014-05-15 Thread Tobias Oetiker
Today we were out of disk space on one of our pools ... a few removed
snapshots later all is fine, except that I find that I don't really
understand the numbers ... can anyone enlighten me?

# zpool list fast
NAME   SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
fast  4.34T  1.74T  2.61T         -    39%  1.22x  ONLINE  -

# zfs list fast
NAME   USED  AVAIL  REFER  MOUNTPOINT
fast  2.59T   716G  78.5K  /fast

Why does the 'zpool list' claim that 2.61T is free (61%)
while 'zfs list' sees 716G free (27%)?

I know there is raidz2 and compression so the numbers  don't match
up, but I don't understand why the ratio is so different between
the two.

I checked on other filesystems and there the view from zpool and
zfs look much more similar.

cheers
tobi

ps. the dedup ratio is a leftover from a time when I tried dedup.

$ zpool get all fast
NAME  PROPERTY                       VALUE                 SOURCE
fast  size                           4.34T                 -
fast  capacity                       39%                   -
fast  altroot                        -                     default
fast  health                         ONLINE                -
fast  guid                           16524146496274345089  default
fast  version                        -                     default
fast  bootfs                         -                     default
fast  delegation                     on                    default
fast  autoreplace                    off                   default
fast  cachefile                      -                     default
fast  failmode                       wait                  default
fast  listsnapshots                  off                   default
fast  autoexpand                     off                   default
fast  dedupditto                     0                     default
fast  dedupratio                     1.22x                 -
fast  free                           2.61T                 -
fast  allocated                      1.74T                 -
fast  readonly                       off                   -
fast  comment                        -                     default
fast  expandsize                     0                     -
fast  freeing                        0                     default
fast  feature@async_destroy          enabled               local
fast  feature@empty_bpobj            active                local
fast  feature@lz4_compress           active                local
fast  feature@multi_vdev_crash_dump  enabled               local
fast  feature@spacemap_histogram     active                local
fast  feature@extensible_dataset     enabled               local

$ zfs get all fast
NAME  PROPERTY         VALUE                  SOURCE
fast  type             filesystem             -
fast  creation         Fri Jan  4 17:19 2013  -
fast  used             2.59T                  -
fast  available        716G                   -
fast  referenced       78.5K                  -
fast  compressratio    1.81x                  -
fast  mounted          yes                    -
fast  quota            none                   default
fast  reservation      none                   default
fast  recordsize       128K                   default
fast  mountpoint       /fast                  default
fast  sharenfs         off                    default
fast  checksum         on                     default
fast  compression      lz4                    local
fast  atime            on                     default
fast  devices          on                     default
fast  exec             on                     default
fast  setuid           on                     default
fast  readonly         off                    default
fast  zoned            off                    default
fast  snapdir          hidden                 default
fast  aclmode          discard                default
fast  aclinherit       restricted             default
fast  canmount         on                     default
fast  xattr            on                     default
fast  copies           1                      default
fast  version          5                      -
fast  utf8only         off                    -
fast  normalization    none                   -
fast  casesensitivity  sensitive              -
fast  vscan            off                    default
fast  nbmand           off                    default
fast  sharesmb         off                    default
fast  refquota         none                   default
fast  refreservation   none                   default
f