[EMAIL PROTECTED] wrote on 01/03/2007 04:21:00 PM:

> [EMAIL PROTECTED] wrote:
> > which is not the behavior I am seeing..
>
> Show me the output, and I can try to explain what you are seeing.
[9:36am] [~]:test% zfs create data/test
[9:36am] [~]:test% zfs set compression=on data/test
[9:37am] [/data/test]:test% zfs snapshot data/test@snap1
[9:37am] [/data/test]:test% cp -R /data/fileblast/export/spare/images .
[9:40am] [/data/test]:test% zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
data/test             13.4G  14.2T  13.4G  /data/test
data/test@snap1        61.2K      -  66.6K  -
[9:40am]  [/data/test]:test% du -sk images
14022392      images
[9:40am]  [/data/test]:test% zfs snapshot data/test@snap2
[9:41am]  [/data/test]:test% zfs snapshot data/test@snap3
[9:41am]  [/data/test]:test% cd images/
[9:41am]  [/data/test/images]:test% cd fullres
[9:41am]  [/data/test/images/fullres]:test% rm -rf [A-H]*
[9:42am]  [/data/test/images/fullres]:test% zfs snapshot data/test@snap4
[9:42am]  [/data/test/images/fullres]:test% zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
data/test             13.4G  14.2T  6.54G  /data/test
data/test@snap1        61.2K      -  66.6K  -
data/test@snap2            0      -  13.4G  -
data/test@snap3            0      -  13.4G  -
data/test@snap4            0      -  6.54G  -
[9:42am]  [/data/test/images/fullres]:test% cd ..
[9:42am]  [/data/test/images]:test% cd ..
[9:42am]  [/data/test]:test% du -sk images
6862197 images

What I would expect to see is:
data/test@snap3        6.86G      -  13.4G

This would show me that snap3 is now the most specific "owner" of 6.86G of
delta. Note that snap2 also references that same delta data, but since it is
not the most specific (newest, last-node) owner I would not expect it to show
that data in its usage. Destroying snaps from the earliest through snap3
would then free the total "used" space reported for the snaps destroyed. I
do see that the REFER column fluctuates down -- but as the test becomes more
complex (more writes/deletes between deltas, and more deltas) I see no way
to correlate REFER with the usage of the snaps. In my original, more complex
tests I lost 50G that was not visible in any view of this list, and it was
only recoverable by deleting snaps until I hit the one that actually owned
the data. This is a major problem for me because our snap policy is to keep
as many snaps as possible (while assuring a low-water mark of free space)
and to have advance notice of snap culling. Every other snapshotting
fs/vm/system I have used has been able to show delta-size ownership for
snaps -- so this has never been an issue for us before...
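
The closest I have come to correlating them by hand is to list the snaps in
creation order and diff the REFER of adjacent pairs, which gives the net
shrink or growth across each interval. A rough sketch only -- it assumes the
snap1..snap4 names from the test above and no clones or child filesystems,
and it still does not say which snap actually pins the blocks:

    zfs list -r -t snapshot -o name,used,referenced data/test
    zfs get -H referenced data/test@snap3 data/test@snap4

Here refer(snap3) minus refer(snap4) is roughly the 6.86G unlinked in that
interval, but a block that was merely rewritten in the same interval would
not show up in the difference at all.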


>
> AFAIK, the manpage is accurate.  The space "used" by a snapshot is exactly
> the amount of space that will be freed up when you run 'zfs destroy
> <snapshot>'.  Once that operation completes, 'zfs list' will show that the
> space "used" by adjacent snapshots has changed as a result.
>
> Unfortunately, at this time there is no way to answer the question "how
> much space would be freed up if I were to delete these N snapshots".  We
> have some ideas on how to express this, but it will probably be some time
> before we are able to implement it.
>
> > If I have 100 snaps of a
> > filesystem that are relatively low delta churn and then delete half of the
> > data out there I would expect to see that space go up in the used column
> > for one of the snaps (in my tests cases I am deleting 50gb out of 100gb
> > filesystem and showing no usage increase on any of the snaps).
>
> That's probably because the 50GB that you deleted from the fs is shared
> among the snapshots, so it is still the case that deleting any one snapshot
> will not free up much space.

No, in my original test I had a few hundred snaps, all with varying deltas.
In the middle of the snap history I unlinked a substantial portion of the
data, creating a large COW delta that should be easy to spot with reporting
tools (a by-hand sanity check is sketched below).
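
That sanity check can be pieced together from existing properties (hedged --
it ignores clones, child filesystems and reservations, and pool/fs is just a
placeholder name):

    zfs get -H used,referenced pool/fs
    zfs list -H -r -t snapshot -o name,used pool/fs

used(fs) minus referenced(fs) is roughly everything held only by snapshots;
subtract the sum of the per-snapshot USED values and what remains is space
pinned by two or more snaps and therefore attributed to none of them in
'zfs list' -- which is presumably where my missing ~50G was sitting.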


>
> > I am planning on having many many snaps on our filesystems and
> > programmatically destroying old snaps as space is needed -- when zfs list
> > does not attach delta usage to snaps it makes this impossible (without
> > blindly deleting snaps, waiting an unspecified period until zfs list is
> > updated, and repeating).
>
> As I mentioned, you need only wait until 'zfs destroy' finishes to see the
> updated accounting from 'zfs list'.

The problem is that this amounts to surveying the forest after the trees have
been burned. I want to be able to plan the cull before swinging the axe; the
best I can do today is the blind loop sketched below.
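
Roughly this (a sketch only -- the filesystem name and the 100G threshold are
placeholders, and it really does destroy data in order to find out how much
space comes back):

    #!/bin/sh
    # Blind culling: destroy the oldest snaps of $FS until $NEED bytes free.
    FS=data/test
    NEED=107374182400                       # ~100G, placeholder threshold
    for snap in `zfs list -H -r -t snapshot -s creation -o name $FS`; do
            avail=`zfs get -H -p -o value available $FS`
            [ "$avail" -ge "$NEED" ] && break
            zfs destroy "$snap"
    done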

>
> > Also another thing that is not really specified in the documentation is
> > where this delta space usage would be listed
>
> What do you mean by "delta space usage"?  AFAIK, ZFS does not use that term
> anywhere, which is why it is not documented :-)

By delta I mean the COW blocks that make a snap differ from its neighbors --
the data that changed between the previous snap and this one, and between
this one and the next snap (or the live filesystem).
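
For the forward half of that delta (blocks written or changed between two
snaps) the only number I can extract today is the size of an incremental
send stream -- a sketch, and a heavyweight one, since it streams all of that
data just to count it:

    zfs send -i data/test@snap2 data/test@snap3 | wc -c

That still says nothing about the backward half -- the blocks a snap keeps
pinned after the live filesystem unlinks them -- which is exactly the number
I am missing.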


>
> --matt

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
