On Sat, Oct 31, 2015 at 10:37:55AM +0100, Remy Blank wrote:
> I'm trying to make sense of the disk usage reported by "zfs list".
> Here's what I get:
> 
> $ zfs list \
> -o name,used,avail,refer,usedbydataset,usedbychildren,usedbysnapshots \
> -t all
> 
> NAME                  USED  AVAIL  REFER  USEDDS  USEDCHILD  USEDSNAP
> pool/data            58.0G   718G  46.7G   46.7G          0     11.3G
> pool/data@2015-10-03     0      -  46.5G       -          -         -
> ...
> pool/data@2015-10-12     0      -  46.5G       -          -         -
> pool/data@2015-10-13  734M      -  46.7G       -          -         -
> pool/data@2015-10-14     0      -  46.7G       -          -         -
> ...
> pool/data@2015-10-28     0      -  46.7G       -          -         -
> pool/data@2015-10-29  755M      -  46.7G       -          -         -
> pool/data@2015-10-30  757M      -  46.7G       -          -         -
> pool/data@2015-10-31     0      -  46.7G       -          -         -
> 
> What I don't understand: I have 29 snapshots, only three of them use
> ~750M, but in total they take 11.3G. Where do the excess 9.1G come from?
> 

I'm going to go out on a limb and assume that zfs works in a similar way
to btrfs here (my quick googling suggests that at least in this case it
does). You then have to read the numbers as follows:

USEDSNAP refers to _data_ that is not in pool/data but only in the snapshots.
The value for USED is _data_ that is present only in *this one* snapshot,
and not in any other snapshot or in pool/data. _data_ that is shared
between at least two snapshots is not counted as USED, because removing one
of those snapshots would not free it (it is still referenced by another
snapshot).

So in your case you have 3 snapshots that each hold ~750 MB exclusively,
and the remaining ~9 GB is shared in some way among the snapshots. If you
were to delete any one of the 3 snapshots, you would free ~750 MB. If you
were to delete all snapshots, you would free 11.3 GB. Note that deleting
any one snapshot can change the USED count of every other snapshot.
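A toy model may make the accounting clearer. This is only a sketch in
Python with made-up block IDs, not how ZFS tracks space internally:

```python
# Hypothetical model: the live dataset and each snapshot are sets of
# block IDs. Space is freed only when a block's last reference goes away.
live = {1, 2, 3}
snaps = {
    "a": {1, 2, 10, 20},   # blocks 10 and 20 exist only in snapshots
    "b": {1, 2, 10, 30},   # shares block 10 with "a"
}

def used(name):
    """Blocks freed by deleting only this snapshot: referenced by it
    and by nothing else (no other snapshot, not the live dataset)."""
    others = live.union(*(s for n, s in snaps.items() if n != name))
    return snaps[name] - others

def usedsnap():
    """Blocks freed by deleting *all* snapshots: referenced by some
    snapshot but not by the live dataset."""
    return set().union(*snaps.values()) - live

print(sorted(used("a")))   # [20]         -- exclusive to "a"
print(sorted(used("b")))   # [30]         -- exclusive to "b"
print(sorted(usedsnap()))  # [10, 20, 30] -- 10 is shared, so it shows
                           # up in USEDSNAP but in neither snapshot's USED
```

Block 10 here plays the role of your "missing" 9.1 GB: it is counted in
USEDSNAP but in no individual snapshot's USED.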

This is one of the drawbacks of copy-on-write filesystems: they make
disk space accounting more complicated, especially with snapshots. Perhaps
zfs has something similar to btrfs qgroups, which let you group
snapshots in arbitrary ways to find out how much space any group of
snapshots uses. Here's example output of 'btrfs qgroup show' on my machine:

    qgroupid         rfer         excl parent  child
    --------         ----         ---- ------  -----
    0/5           0.00GiB      0.00GiB ---     ---
    0/262         6.37GiB      0.03GiB ---     ---
    0/265         3.52GiB      2.38GiB 1/0     ---
    0/270         6.38GiB      0.16GiB ---     ---
    0/275         0.00GiB      0.00GiB ---     ---
    0/276         4.38GiB      0.35GiB 1/0     ---
    0/277         0.00GiB      0.00GiB ---     ---
    0/278         4.98GiB      0.40GiB 1/1     ---
    0/279         4.62GiB      0.12GiB 1/0     ---
    0/285         5.59GiB      0.01GiB 1/0     ---
    0/286         5.69GiB      0.01GiB 1/0     ---
    0/289         6.34GiB      0.42GiB 1/1     ---
    0/290         6.35GiB      0.01GiB 1/0     ---
    0/291         6.38GiB      0.15GiB 1/1     ---
    1/0          10.02GiB      3.68GiB ---     0/265,0/276,0/279,0/285,0/286,0/290
    1/1           7.20GiB      0.98GiB ---     0/278,0/289,0/291

0/262 is /
0/270 is /home
1/0 contains all snapshots of /
1/1 contains all snapshots of /home 

but I could also have grouped a subset of the snapshots in some other
way to find out how much space they reference in total and how much
space would be freed if they were all deleted.
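The group-level numbers can be sketched the same way (again a toy Python
model with made-up block IDs, not the real btrfs implementation): rfer is
everything the group's members reference, excl is what only the group
references, i.e. what deleting the whole group would free.

```python
# Hypothetical subvolumes/snapshots as sets of block IDs.
subvols = {
    "0/265": {1, 2, 5},
    "0/276": {1, 2, 6},
    "0/278": {1, 3, 7},  # shares block 1 with the others
}

def group_stats(members):
    """(rfer, excl) in block counts for an arbitrary group of subvolumes:
    rfer = blocks referenced by any member, excl = blocks referenced
    only by members (freed if the whole group is deleted)."""
    inside = set().union(*(subvols[m] for m in members))
    outside = set().union(*(s for n, s in subvols.items() if n not in members))
    return len(inside), len(inside - outside)

print(group_stats({"0/265", "0/276"}))  # (4, 3): 4 blocks referenced,
                                        # 3 exclusive (block 1 is also
                                        # referenced by 0/278)
```

The key point is that excl is a property of the group as a whole; it is
not the sum of the members' individual exclusive sizes.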
