Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Wade . Stuart






[EMAIL PROTECTED] wrote on 01/03/2007 04:21:00 PM:

 [EMAIL PROTECTED] wrote:
  which is not the behavior I am seeing..

 Show me the output, and I can try to explain what you are seeing.
[9:36am] [~]:test% zfs create data/test
[9:36am] [~]:test% zfs set compression=on data/test
[9:37am] [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:37am] [/data/test]:test% cp -R /data/fileblast/export/spare/images .
[9:40am] [/data/test]:test% zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data/test 13.4G  14.2T  13.4G  /data/test
data/[EMAIL PROTECTED]   61.2K  -  66.6K  -
[9:40am]  [/data/test]:test% du -sk images
14022392  images
[9:40am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:41am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:41am]  [/data/test]:test% cd images/
[9:41am]  [/data/test/images]:test% cd fullres
[9:41am]  [/data/test/images/fullres]:test% rm -rf [A-H]*
[9:42am]  [/data/test/images/fullres]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:42am]  [/data/test/images/fullres]:test% zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data/test 13.4G  14.2T  6.54G  /data/test
data/[EMAIL PROTECTED]   61.2K  -  66.6K  -
data/[EMAIL PROTECTED]   0  -  13.4G  -
data/[EMAIL PROTECTED]   0  -  13.4G  -
data/[EMAIL PROTECTED]   0  -  6.54G  -
[9:42am]  [/data/test/images/fullres]:test% cd ..
[9:42am]  [/data/test/images]:test% cd ..
[9:42am]  [/data/test]:test% du -sk images
6862197 images

What I would expect to see is:
data/[EMAIL PROTECTED]   6.86G  -  13.4G

This shows me that snap3 is now the most specific owner of 6.86G of
delta. Note that snap2 also references this same delta data, but it is not
the most specific (newest, last-node) owner, so it is not expected to
show this data in its usage. Removing snaps from the earliest through
snap3 would free the total used space of the snaps destroyed. I do see
that the REFER column fluctuates down -- but as the test becomes more
complex (more writes/deletes between deltas, and more deltas) I do not see
any way to correlate the usage of the snaps.  In my original, more complex
tests I lost 50G from every view in this list; it was only
recoverable by deleting snaps until I hit the one that actually owned the
data. This is a major problem for me because our snap policy is to keep
as many snaps as possible (while maintaining a low-water mark of free
space) and to have advance notice of snap culling.  Every other snapshotting
fs/vm/system I have used has been able to show delta-size ownership for
snaps -- so this has never been an issue for us before...
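To make the two accountings concrete, here is a small Python toy model (purely illustrative -- the block IDs and sizes are invented, and this is not how ZFS tracks space internally) contrasting the value zfs list reports, i.e. blocks unique to one snapshot, with the "most specific owner" view described above:

# Toy model: each snapshot (and the live fs) is just a set of block ids,
# every block counting as one unit of space.  Not a ZFS interface.
def zfs_used(snaps, live, name):
    """What 'zfs list' reports: blocks referenced *only* by this snapshot."""
    others = set().union(live, *(b for n, b in snaps if n != name))
    return len(dict(snaps)[name] - others)

def last_owner_used(snaps, live, name):
    """The expectation above: blocks whose newest reference is this snapshot,
    i.e. present here but gone from the next snap (or the live fs)."""
    names = [n for n, _ in snaps]
    i = names.index(name)
    newer = live if i == len(snaps) - 1 else dict(snaps)[names[i + 1]]
    return len(dict(snaps)[name] - newer)

# Rough analogue of the transcript: snap2 and snap3 both reference the
# images, and the live fs has since dropped about half of them.
images, kept = set(range(100)), set(range(50))
snaps = [("snap1", set()), ("snap2", set(images)), ("snap3", set(images))]
for n, _ in snaps:
    print(n, "used:", zfs_used(snaps, kept, n),
          "last-owner:", last_owner_used(snaps, kept, n))
# snap3 reports used=0 (snap2 still references the data), but it is the
# last owner of the 50 deleted blocks.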



 AFAIK, the manpage is accurate.  The space used by a snapshot is exactly
 the amount of space that will be freed up when you run 'zfs destroy
 snapshot'.  Once that operation completes, 'zfs list' will show that the
 space used by adjacent snapshots has changed as a result.

 Unfortunately, at this time there is no way to answer the question how
 much space would be freed up if I were to delete these N snapshots.  We
 have some ideas on how to express this, but it will probably be some time
 before we are able to implement it.

  If I have 100 snaps of a
  filesystem that are relatively low delta churn and then delete half of the
  data out there I would expect to see that space go up in the used column
  for one of the snaps (in my tests cases I am deleting 50gb out of 100gb
  filesystem and showing no usage increase on any of the snaps).

 That's probably because the 50GB that you deleted from the fs is shared
 among the snapshots, so it is still the case that deleting any one snapshot
 will not free up much space.

No, in my original test I had a few hundred snaps, all with varying deltas.
In the middle of the snap history I unlinked a substantial portion of the
data, creating a large COW delta that should be easy to spot with
reporting tools.



  I am
  planning on having many many snaps on our filesystems and programmatically
  during old snaps as space is needed -- when zfs list does not attach delta
  usage to snaps it makes this impossible (without blindly deleting snaps,
  waiting an unspecified period until zfs list is updated and repeat).

 As I mentioned, you need only wait until 'zfs destroy' finishes to see the
 updated accounting from 'zfs list'.

The problem is that this is looking at the forest after you have burned all
the trees. I want to be able to plan the cull before swinging the axe.


  Also another thing that is not really specified in the documentation is
  where this delta space usage would be listed

 What do you mean by delta space usage?  AFAIK, ZFS does not use that term
 anywhere, which is why it is not documented :-)

By delta I mean the COW blocks that the snap represents as delta from the
previous snap and the next snap or live -- in other words what makes this
snap different from the previous or next snap.



 

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens

Darren Dunham wrote:

Is the problem of displaying the potential space freed by multiple
destructions one of calculation (do you have to walk snapshot trees?) or
one of formatting and display?


Both, because you need to know, for each snapshot, how much of the data
it references was first referenced in each previous snapshot.
Displaying these O(Nsnapshots^2) data points is nontrivial.
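As a rough illustration of the amount of data involved, here is a toy Python sketch (block IDs invented; snapshots modeled as plain sets of block IDs, which is not a ZFS interface) that builds the table being described -- for each snapshot, how much of what it references was first referenced in each earlier snapshot:

# entry[i][j] = how much of what snapshot i references was first
# referenced in snapshot j (for j <= i).  One unit of space per block.
def first_reference_table(snaps):
    names = [n for n, _ in snaps]
    sets = [b for _, b in snaps]
    table = {}
    for i, ref in enumerate(sets):
        earlier, row = set(), {}
        for j in range(i + 1):
            born_in_j = sets[j] - earlier      # blocks first seen in snap j
            row[names[j]] = len(ref & born_in_j)
            earlier |= sets[j]
        table[names[i]] = row
    return table

snaps = [("snap1", {1, 2}), ("snap2", {1, 2, 3, 4}), ("snap3", {3, 4, 5})]
for name, row in first_reference_table(snaps).items():
    print(name, row)
# With N snapshots this is on the order of N^2/2 numbers, which is why a
# single 'zfs list' column cannot answer the "delete these N snaps" question.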


As I mentioned, you need only wait until 'zfs destroy' finishes to see the 
updated accounting from 'zfs list'.


So if reaching a hard target was necessary, we could just delete
snapshots in age order, checking space after each, until the target
space became available.  But there would be no way to see beforehand how
many that would be, or if it was worth starting the process if some
snapshots are special and not available for deletion.


That's correct.
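The destroy-and-recheck loop described above is the only option available today. A hedged sketch of it in Python follows; the dataset name and free-space target are placeholders, and the zfs list flags used (-H, -o, -s creation, -t snapshot, -r) should be verified against your release before relying on this:

#!/usr/bin/env python
# Destroy snapshots oldest-first, re-checking free space after each destroy.
import subprocess

DATASET = "data/test"               # hypothetical dataset
TARGET_FREE = 50 * 1024 ** 3        # stop once 50 GB is available

def zfs(*args):
    return subprocess.check_output(("zfs",) + args).decode()

def avail_bytes():
    # 'zfs list -H -o available' prints a human-readable size such as '14.2T'.
    val = zfs("list", "-H", "-o", "available", DATASET).strip()
    units = {"K": 2 ** 10, "M": 2 ** 20, "G": 2 ** 30, "T": 2 ** 40}
    return float(val[:-1]) * units[val[-1]] if val[-1] in units else float(val)

def snapshots_oldest_first():
    out = zfs("list", "-H", "-o", "name", "-t", "snapshot",
              "-s", "creation", "-r", DATASET)
    return [line for line in out.splitlines() if line]

for snap in snapshots_oldest_first():
    if avail_bytes() >= TARGET_FREE:
        break
    print("destroying", snap)
    subprocess.check_call(["zfs", "destroy", snap])
# There is no dry run: you only learn how many snapshots it takes by
# actually destroying them, which is the complaint in this thread.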

--matt


Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens

[EMAIL PROTECTED] wrote:

[9:40am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:41am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]

...

[9:42am]  [/data/test/images/fullres]:test% zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data/test 13.4G  14.2T  6.54G  /data/test
data/[EMAIL PROTECTED]   61.2K  -  66.6K  -
data/[EMAIL PROTECTED]   0  -  13.4G  -
data/[EMAIL PROTECTED]   0  -  13.4G  -
data/[EMAIL PROTECTED]   0  -  6.54G  -


When snap3 is deleted, no space will be freed (because it will still be 
referenced by snap2), therefore the space used by snap3 is 0.



What I would expect to see is:
data/[EMAIL PROTECTED]   6.86G  -  13.4G



This shows me that snap3 now is the most specific owner of 6.86G of
delta.


I understand that you *want* it to display a different value, but it is 
correctly showing the documented value.  How can we make the manpage 
better, to avoid this confusion in the future?



By delta I mean the COW blocks that the snap represents as delta from the
previous snap and the next snap or live -- in other words what makes this
snap different from the previous or next snap.


The best way I could come up with to define a bounded number of stats to 
express space usage for snapshots was the amount of space born and the 
amount killed.  Space born is the amount of space that is newly 
allocated in this snapshot (ie. not referenced in the prev snap, but 
referenced here).  Space killed is the amount of space that is newly 
freed in this snapshot (ie. referenced in the prev snap, but not 
referenced here).
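In the same toy block-set model as before (invented block IDs, not a ZFS interface), the two statistics being defined here would look roughly like this:

# "born" = referenced by this snapshot but not the previous one;
# "killed" = referenced by the previous snapshot but not this one.
def born_and_killed(prev_refs, this_refs):
    born = this_refs - prev_refs
    killed = prev_refs - this_refs
    return len(born), len(killed)

snaps = [("snap1", {1, 2}), ("snap2", {1, 2, 3, 4}), ("snap3", {3, 4, 5})]
prev = set()
for name, refs in snaps:
    b, k = born_and_killed(prev, refs)
    print(name, "born:", b, "killed:", k)
    prev = refs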


We considered including these numbers, but decided against it, primarily 
because they can't actually answer the important question:  how much 
space will be freed if I delete these N snapshots?


You can't answer this question because you don't know *which* blocks 
were born and killed.  Consider 2 filesystems, A and B, both of which 
have lots of churn between every snapshot.  However, in A every block is 
referenced by exactly 2 snapshots, and in B every block is referenced by 
exactly 3 snapshots (excluding the first/last few).  The born/killed 
stats for A and B's snapshots may be the same, so there's no way to tell 
that to free up space in A, you must delete at least 2 adjacent snaps 
vs. for B, you must delete at least 3.
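A concrete (entirely invented) construction of this A-versus-B scenario, in the same toy block-set model:

# In A each block is referenced by exactly 2 consecutive snapshots,
# in B by exactly 3, with the same churn per snapshot.
def snapshots(lifetime, nsnaps, churn):
    """Snapshot i references the blocks born in the last `lifetime` steps."""
    return [set((j, x) for j in range(max(0, i - lifetime + 1), i + 1)
                for x in range(churn))
            for i in range(nsnaps)]

def freed_by_deleting(snaps, live, doomed):
    keep = set().union(live, *(s for i, s in enumerate(snaps) if i not in doomed))
    gone = set().union(*(snaps[i] for i in doomed))
    return len(gone - keep)

A = snapshots(lifetime=2, nsnaps=8, churn=10)
B = snapshots(lifetime=3, nsnaps=8, churn=10)
live = set()                      # ignore the live fs for this comparison

# Born/killed per transition look the same for A and B away from the ends...
for name, s in (("A", A), ("B", B)):
    print(name, [(len(n - p), len(p - n)) for p, n in zip(s, s[1:])])
# ...yet deleting 2 adjacent snapshots only frees space in A:
print("A, delete snaps 3+4:", freed_by_deleting(A, live, {3, 4}))       # 10
print("B, delete snaps 3+4:", freed_by_deleting(B, live, {3, 4}))       # 0
print("B, delete snaps 3+4+5:", freed_by_deleting(B, live, {3, 4, 5}))  # 10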


To really answer this question (how much space would be freed if I 
deleted these N snapshots), you need to know for each snapshot, how much 
of the space that it references was first referenced in each of the 
previous snapshots.  We're working on a way to compute and graphically 
display these values, which should make them relatively easy to interpret.


--matt


Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Wade . Stuart




Matthew,

 I really do appreciate this discussion; thank you for taking the time to
go over this with me.



Matthew Ahrens [EMAIL PROTECTED] wrote on 01/04/2007 01:49:00 PM:

 [EMAIL PROTECTED] wrote:
  [9:40am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
  [9:41am]  [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
 ...
  [9:42am]  [/data/test/images/fullres]:test% zfs list
  NAME   USED  AVAIL  REFER  MOUNTPOINT
  data/test 13.4G  14.2T  6.54G  /data/test
  data/[EMAIL PROTECTED]   61.2K  -  66.6K  -
  data/[EMAIL PROTECTED]   0  -  13.4G  -
  data/[EMAIL PROTECTED]   0  -  13.4G  -
  data/[EMAIL PROTECTED]   0  -  6.54G  -

 When snap3 is deleted, no space will be freed (because it will still be
 referenced by snap2), therefore the space used by snap3 is 0.

Where does it show (on any of the snaps) that they are holding 6.8G of disk
space hostage? I understand that snap2 and snap3 both share that data; that's
why below I say that snap3, being the most specific owner, should list the
6.8G as used -- showing that you need to delete every snap from the first
through snap3 to free 6.8G (more specifically, deleting snap1 gets you 61.3k,
snap2 0, and snap3 6.8G, if you delete them in that order, and only guaranteed
if you delete them all).  If I just delete snap3 and leave snap2, I would
expect snap2 to become the most specific owner of that delta data, showing
6.8G of usage.



  What I would expect to see is:
  data/[EMAIL PROTECTED]   6.86G  -  13.4G

  This shows me that snap3 now is the most specific owner of 6.86G of
  delta.

 I understand that you *want* it to display a different value, but it is
 correctly showing the documented value.  How can we make the manpage
 better, to avoid this confusion in the future?



  The amount of space consumed by this dataset and all its
 descendants.  This  is the value that is checked against
 this dataset's quota and  reservation.  The  space  used
 does  not  include  this dataset's reservation, but does
 take into account the  reservations  of  any  descendant
 datasets.  The  amount  of space that a dataset consumes
 from its parent, as well as the  amount  of  space  that
 will  be freed if this dataset is recursively destroyed,
 is the greater of its space used and its reservation.


Maybe kill this part below entirely and state that the usage column for
snapshots is undetermined and may or may not reflect the actual disk usage
associated with the snapshot, especially when blocks are freed between
snapshots.

 When  snapshots  (see  the  Snapshots   section)   are
 created,  their  space  is  initially shared between the
 snapshot and the file system, and possibly with previous
 snapshots.  As  the  file system changes, space that was
 previously shared becomes unique to  the  snapshot,  and
 counted  in  the  snapshot's  space  used. Additionally,
 deleting snapshots can  increase  the  amount  of  space
 unique to (and used by) other snapshots.







  By delta I mean the COW blocks that the snap represents as delta from the
  previous snap and the next snap or live -- in other words what makes this
  snap different from the previous or next snap.

 The best way I could come up with to define a bounded number of stats to
 express space usage for snapshots was the amount of space born and the
 amount killed.  Space born is the amount of space that is newly
 allocated in this snapshot (ie. not referenced in the prev snap, but
 referenced here).  Space killed is the amount of space that is newly
 freed in this snapshot (ie. referenced in the prev snap, but not
 referenced here).

For the common case you don't care about born or killed; you only care
about blocks that are referenced by this snap but not by the next one (or
by the live fs if this is the last snap) -- these cover both the new
files/blocks and the deleted files/blocks that this snap now owns.

Assuming you can get the set of COW blocks for a given snapshot, then for
each snapshot, in order from oldest to newest, showing the size of the
blocks that are in snapshot N but not in snapshot N+1 (the set difference
N - N+1) would give me enough information to plan destroys.  The running
total of that number is how much space would be freed by deleting the
snapshots from the oldest one through this one, in sequence. This would
allow us to plan deletions, or even see where the peak deltas happen in a
long series of snaps. I understand this is not as nice as doing a
bidirectional difference test to gather more information, such as what
deleting a random snapshot in the series would free, if anything, but that
seems to require much more overhead.  I think this covers the most common
usage: deleting older snaps before deleting random snaps in the series or
the newest snap first.
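A sketch of that report in the toy block-set model used earlier (invented data, not a zfs command): for each snapshot, the size of the blocks it references that the next snapshot (or the live fs, for the newest snap) no longer references, plus the running total, i.e. how much space deleting everything from the oldest snap through this one would free:

def cull_plan(snaps, live):
    plan, cumulative = [], 0
    for (name, refs), (_, newer) in zip(snaps, snaps[1:] + [("live", live)]):
        delta = len(refs - newer)     # blocks whose last reference is this snap
        cumulative += delta
        plan.append((name, delta, cumulative))
    return plan

snaps = [("snap1", {1, 2}), ("snap2", {1, 2, 3, 4}), ("snap3", {3, 4, 5})]
live = {4, 5, 6}
for name, delta, cum in cull_plan(snaps, live):
    print("%s: delta %d, freed if oldest..%s destroyed: %d"
          % (name, delta, name, cum))
# The cumulative column answers the oldest-first question only; arbitrary
# subsets (e.g. "keep every monthly snap") still need the full per-pair data.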

Even if this were a long operation executed via a separate list flag, it
would sure help.  I am

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens

[EMAIL PROTECTED] wrote:

Common case to me is, how much would be freed by deleting the snapshots in
order of age from oldest to newest always starting with the oldest.


That would be possible.  A given snapshot's "space used by this and all
prior snapshots" would be the previous snap's used+prior plus the next
snap's killed (as defined in my previous email).  I can see that exposing
the killed value may be useful in some circumstances.
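Checking that identity in the toy block-set model (invented data): the space freed by destroying the snapshots oldest-first through snap i equals the previous cumulative value plus the next snapshot's killed amount, because a COW block is referenced by one contiguous run of snapshots (and then, possibly, the live fs):

snaps = [{1, 2}, {1, 2, 3, 4}, {3, 4, 5}]
live = {4, 5, 6}
nxt = snaps[1:] + [live]

cumulative = 0
for i, refs in enumerate(snaps):
    cumulative += len(refs - nxt[i])          # next snapshot's "killed"
    survivors = set().union(live, *snaps[i + 1:])
    freed = len(set().union(*snaps[:i + 1]) - survivors)   # brute force
    assert freed == cumulative
    print("destroy snaps 0..%d -> frees %d blocks" % (i, freed))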


However, I think that the more general question (ie. for arbitrary 
ranges of snapshots) would be required in many cases.  For example, if 
you had a more complicated snapshot policy, like keep every monthly 
snapshot, but delete any others to make space.  I imagine that such a 
policy would be fairly common.


--matt


Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-03 Thread Matthew Ahrens

[EMAIL PROTECTED] wrote:
which is not the behavior I am seeing..  


Show me the output, and I can try to explain what you are seeing.

AFAIK, the manpage is accurate.  The space used by a snapshot is exactly 
the amount of space that will be freed up when you run 'zfs destroy 
snapshot'.  Once that operation completes, 'zfs list' will show that the 
space used by adjacent snapshots has changed as a result.


Unfortunately, at this time there is no way to answer the question how 
much space would be freed up if I were to delete these N snapshots.  We 
have some ideas on how to express this, but it will probably be some time 
before we are able to implement it.



If I have 100 snaps of a
filesystem that are relatively low delta churn and then delete half of the
data out there I would expect to see that space go up in the used column
for one of the snaps (in my tests cases I am deleting 50gb out of 100gb
filesystem and showing no usage increase on any of the snaps).


That's probably because the 50GB that you deleted from the fs is shared 
among the snapshots, so it is still the case that deleting any one snapshot 
will not free up much space.



I am
planning on having many many snaps on our filesystems and programmatically
during old snaps as space is needed -- when zfs list does not attach delta
usage to snaps it makes this impossible (without blindly deleting snaps,
waiting an unspecified period until zfs list is updated and repeat).


As I mentioned, you need only wait until 'zfs destroy' finishes to see the 
updated accounting from 'zfs list'.



Also another thing that is not really specified in the documentation is
where this delta space usage would be listed


What do you mean by delta space usage?  AFAIK, ZFS does not use that term 
anywhere, which is why it is not documented :-)


--matt


Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-02 Thread Wade . Stuart




I am bringing this up again in the hope that more eyes may be on the list
now than before the holidays..

the zfs man page lists the usage column as:
 used

 The amount of space consumed by this dataset and all its
 descendants.  This  is the value that is checked against
 this dataset's quota and  reservation.  The  space  used
 does  not  include  this dataset's reservation, but does
 take into account the  reservations  of  any  descendant
 datasets.  The  amount  of space that a dataset consumes
 from its parent, as well as the  amount  of  space  that
 will  be freed if this dataset is recursively destroyed,
 is the greater of its space used and its reservation.

 When  snapshots  (see  the  Snapshots   section)   are
 created,  their  space  is  initially shared between the
 snapshot and the file system, and possibly with previous
 snapshots.  As  the  file system changes, space that was
 previously shared becomes unique to  the  snapshot,  and
 counted  in  the  snapshot's  space  used. Additionally,
 deleting snapshots can  increase  the  amount  of  space
 unique to (and used by) other snapshots.

 The amount of space used, available, or referenced  does
 not  take  into account pending changes. Pending changes
 are generally accounted for within a few  seconds.  Com-
 mitting  a  change  to  a disk using fsync(3c) or O_SYNC
 does not necessarily  guarantee  that  the  space  usage
 information is updated immediately.


which is not the behavior I am seeing..  If I have 100 snaps of a
filesystem with relatively low delta churn and then delete half of the
data out there, I would expect to see that space go up in the used column
for one of the snaps (in my test cases I am deleting 50gb out of a 100gb
filesystem and seeing no usage increase on any of the snaps).  I am
planning on having many, many snaps on our filesystems and programmatically
during old snaps as space is needed -- when zfs list does not attach delta
usage to snaps it makes this impossible (without blindly deleting snaps,
waiting an unspecified period until zfs list is updated, and repeating).  Is
this really the behavior that is expected, am I missing some more specific
usage data, or is this some sort of bug?

Also another thing that is not really specified in the documentation is
where this delta space usage would be listed -- what makes sense to me
would be to have the oldest snap that owns the blocks take the usage hit
for them and move the usage hit up to the next snap as the oldest is
deleted.





 WSfc Hola folks,

 WSfc   I am new to the list, please redirect me if I am posting to the wrong
 WSfc location.  I am starting to use ZFS in production (Solaris x86 10U3 --
 WSfc 11/06) and I seem to be seeing unexpected behavior for zfs list and
 WSfc snapshots.  I create a filesystem (lets call it a/b where a is the pool).
 WSfc Now, if I store 100 gb of files on a/b and then snapshot a/[EMAIL PROTECTED] then
 WSfc delete about 50 gb of files from a/b -- I expect to see ~50 gb USED on
 WSfc both a/b and a/[EMAIL PROTECTED] via zfs list output -- instead I only seem to see the
 WSfc delta block adds as USED (~20mb) on a/[EMAIL PROTECTED]  Is this correct behavior?
 WSfc how do you track the total delta blocks the snap is using vs other snaps
 WSfc and live fs?

 This is almost[1] ok. When you delete a file from a file system you
 definitely expect to see that the file system allocated space reduced
 by about the same size.

 [1] the problem is that space consumed by snapshot isn't entirely
 correct and once you delete snapshot you'll actually get some more
 space than zfs list reported for that snapshot as used space. It's not
 a big deal but still it makes it harder to determine exactly how much
 space is allocated for snapshots for a given file system.




Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-02 Thread Wade . Stuart




Sorry, a few corrections and inserts..

 which is not the behavior I am seeing..  If I have 100 snaps of a
 filesystem that are relatively low delta churn and then delete half of
the
 data out there I would expect to see that space go up in the used column
 for one of the snaps (in my tests cases I am deleting 50gb out of 100gb
 filesystem and showing no usage increase on any of the snaps).  I am
 planning on having many many snaps on our filesystems and
programmatically
 during old snaps as space is needed -- when zfs list does not attach
delta
s/during/purging/

 usage to snaps it makes this impossible (without blindly deleting snaps,
 waiting an unspecified period until zfs list is updated and repeat).  Is
 this really the behavior that is expected, am I missing some more
specific
 usage data,  or is this some sort of bug?

 Also another thing that is not really specified in the documentation is
 where this delta space usage would be listed -- what makes sense to me
 would be to have the oldest snap that owns the blocks take the usage hit
 for them and move the usage hit up to the next snap as the oldest is
 deleted.

I guess the more I think about it, the more it makes sense to show the usage
charge on the newest snap that references the blocks as delta -- this way
you show the most specific snap that is reserving the space (the snap up to
and including which you need to delete all snaps in order to free said space).




  WSfc Hola folks,
 
  WSfc   I am new to the list, please redirect me if I am posting to the wrong
  WSfc location.  I am starting to use ZFS in production (Solaris x86 10U3 --
  WSfc 11/06) and I seem to be seeing unexpected behavior for zfs list and
  WSfc snapshots.  I create a filesystem (lets call it a/b where a is the pool).
  WSfc Now, if I store 100 gb of files on a/b and then snapshot a/[EMAIL PROTECTED] then
  WSfc delete about 50 gb of files from a/b -- I expect to see ~50 gb USED on
  WSfc both a/b and a/[EMAIL PROTECTED] via zfs list output -- instead I only seem to see the
  WSfc delta block adds as USED (~20mb) on a/[EMAIL PROTECTED]  Is this correct behavior?
  WSfc how do you track the total delta blocks the snap is using vs other snaps
  WSfc and live fs?
 
  This is almost[1] ok. When you delete a file from a file system you
  definitely expect to see that the file system allocated space reduced
  by about the same size.
 
  [1] the problem is that space consumed by snapshot isn't entirely
  correct and once you delete snapshot you'll actually get some more
  space than zfs list reported for that snapshot as used space. It's not
  a big deal but still it makes it harder to determine exactly how much
  space is allocated for snapshots for a given file system.
 



Re: [zfs-discuss] Re: zfs list and snapshots..

2006-12-22 Thread Robert Milkowski
Hello Wade,

Thursday, December 21, 2006, 10:15:56 PM, you wrote:





WSfc Hola folks,

WSfc   I am new to the list, please redirect me if I am posting to the wrong
WSfc location.  I am starting to use ZFS in production (Solaris x86 10U3 --
WSfc 11/06) and I seem to be seeing unexpected behavior for zfs list and
WSfc snapshots.  I create a filesystem (lets call it a/b where a is the pool).
WSfc Now, if I store 100 gb of files on a/b and then snapshot a/[EMAIL PROTECTED] then
WSfc delete about 50 gb of files from a/b -- I expect to see ~50 gb USED on
WSfc both a/b and a/[EMAIL PROTECTED] via zfs list output -- instead I only seem to see the
WSfc delta block adds as USED (~20mb) on a/[EMAIL PROTECTED]  Is this correct behavior?
WSfc how do you track the total delta blocks the snap is using vs other snaps
WSfc and live fs?

This is almost[1] OK. When you delete a file from a file system you
definitely expect to see the file system's allocated space reduced
by about the same amount.

[1] the problem is that the space consumed by a snapshot isn't entirely
correct, and once you delete a snapshot you'll actually get back somewhat
more space than zfs list reported as used for that snapshot. It's not
a big deal, but it still makes it harder to determine exactly how much
space is allocated to snapshots for a given file system.



-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Re: zfs list and snapshots..

2006-12-22 Thread Wade . Stuart





[EMAIL PROTECTED] wrote on 12/22/2006 04:50:25 AM:

 Hello Wade,

 Thursday, December 21, 2006, 10:15:56 PM, you wrote:





 WSfc Hola folks,

 WSfc   I am new to the list, please redirect me if I am posting to the wrong
 WSfc location.  I am starting to use ZFS in production (Solaris x86 10U3 --
 WSfc 11/06) and I seem to be seeing unexpected behavior for zfs list and
 WSfc snapshots.  I create a filesystem (lets call it a/b where a is the pool).
 WSfc Now, if I store 100 gb of files on a/b and then snapshot a/[EMAIL PROTECTED] then
 WSfc delete about 50 gb of files from a/b -- I expect to see ~50 gb USED on
 WSfc both a/b and a/[EMAIL PROTECTED] via zfs list output -- instead I only seem to see the
 WSfc delta block adds as USED (~20mb) on a/[EMAIL PROTECTED]  Is this correct behavior?
 WSfc how do you track the total delta blocks the snap is using vs other snaps
 WSfc and live fs?

 This is almost[1] ok. When you delete a file from a file system you
 definitely expect to see that the file system allocated space reduced
 by about the same size.

 [1] the problem is that space consumed by snapshot isn't entirely
 correct and once you delete snapshot you'll actually get some more
 space than zfs list reported for that snapshot as used space. It's not
 a big deal but still it makes it harder to determine exactly how much
 space is allocated for snapshots for a given file system.


Well, this is a problem for me.  In the case I showed above, the snapshot
USAGE in zfs list is not just a little off on how much space it actually is
reserving for the delta blocks -- it is 50gb off out of a 52.002gb
delta.  And that is a test case, where I actually know the delta.  When
this goes into production and I need to snap 6+ times a day on dynamic
filesystems, how am I to programmatically determine how many snaps need to
fall off over time to keep the maximum number of snapshots while
retaining enough free pool space for new live updates?  I find it hard to
believe that, with all of the magic of zfs (it is a truly great leap in
filesystems), I am expected to blindly remove tail snaps until I free enough
space on the pool.


I have to assume there is a more accurate metric somewhere for how much of
the pool is reserved for a snapshot at a point in time, or else zfs list is
reporting buggy data...






[zfs-discuss] Re: zfs list and snapshots..

2006-12-21 Thread Wade . Stuart





Hola folks,

  I am new to the list, please redirect me if I am posting to the wrong
location.  I am starting to use ZFS in production (Solaris x86 10U3 --
11/06) and I seem to be seeing unexpected behavior for zfs list and
snapshots.  I create a filesystem (lets call it a/b where a is the pool).
Now, if I store 100 gb of files on a/b and then snapshot a/[EMAIL PROTECTED] 
then
delete about 50 gb of files from a/b -- I expect to see ~50 gb USED on
both a/b and a/[EMAIL PROTECTED] via zfs list output -- instead I only seem to 
see the
delta block adds as USED (~20mb) on a/[EMAIL PROTECTED]  Is this correct 
behavior?
how do you track the total delta blocks the snap is using vs other snaps
and live fs?

Thanks!
Wade Stuart

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss