This case was approved at today's psarc meeting.

--matt

Matthew Ahrens wrote:
> Template Version: @(#)sac_nextcase 1.68 02/23/09 SMI
> This information is Copyright 2009 Sun Microsystems
> 1. Introduction
>     1.1. Project/Component Working Name:
>        zfs snapshot holds
>     1.2. Name of Document Author/Supplier:
>        Author:  Chris Kirby
>     1.3  Date of This Document:
>       11 May, 2009
> 4. Technical Description
> 
> ZFS snapshot reference counts
> 
> A. SUMMARY:
> 
> This case adds support to ZFS for user-initiated reference counts on 
> snapshots.
> 
> B. PROBLEM:
> 
> Remote replication of ZFS datasets can result in different automatic
> snapshot policies on the two sides of a replication pair. For example,
> the sending side might want to keep five snapshots at one-minute intervals,
> while the receiving side might want to keep ten snapshots at one-minute
> intervals.
> 
> This can result in the older snapshots being destroyed inadvertently
> by zfs receive since they no longer exist on the sending side.
> 
> Also, when an administrator wants to destroy a snapshot for which there
> are clones, the admin must remember to come back and destroy the snapshot
> once the last clone has been destroyed. It would be handy to have
> an automated way of doing this deferred removal.
> 
> C. PROPOSED SOLUTION
> 
> C.1. Overview
> 
> This new facility would permit users or applications to place
> holds on snapshots that would prevent them from being deleted. Further,
> this facility would allow a snapshot with clones to be deleted pending the
> removal of the last clone using the new "zfs destroy -d" option.
> 
> Each snapshot will have an associated user-reference count, initialized
> to zero. This count increases by one whenever a hold is taken on the snapshot
> and decreases by one whenever a hold is released.
> 
> In the current model, in order for a snapshot to be destroyed with
> "zfs destroy", it must have no clones.  In the new model, the snapshot
> must also have a zero user-reference count.
> 
> In the example in section B, the receiving side could place holds
> on the ten snapshots it wants to keep, which will prevent the zfs receive
> operation from destroying them.
> 
> C.2. Version Compatibility
> 
> Using these features does not require a pool upgrade.
> Snapshots created on earlier pool versions will be eligible
> for user reference counts and will be treated as if their
> initial reference count is zero.
> 
> C.3. Changes to Existing Subcommands
> 
> C.3.1 zfs destroy -d
> 
> The "zfs destroy" subcommand now takes a -d option, applicable only
> to snapshots. If specified, the snapshot will be destroyed right away
> if and only if "zfs destroy" without the -d option would have deleted it.
> i.e. immediate deletion by "zfs destroy -d" requires the following:
> 
> 1) The snapshot has no clones.
>    AND
> 2) The user-initiated reference count is zero.
> 
> Otherwise, the snapshot is marked for deferred deletion, where it will exist
> as a normal (usable/visible) snapshot until both of the preconditions listed
> above are met, at which point the snapshot will be destroyed.
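> 
> As an illustration (the dataset and tag names here are hypothetical):
> 
> # zfs hold mytag tank/home@monday
> # zfs destroy tank/home@monday        (fails with EBUSY while the hold exists)
> # zfs destroy -d tank/home@monday     (snapshot is marked for deferred deletion)
> 
> The snapshot remains usable and visible until its hold is released, at
> which point it is destroyed as described in section C.4.2.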
> 
> C.3.2. zfs destroy <clone>
> 
> When a snapshot's last clone is destroyed with "zfs destroy", if the
> snapshot's user-initiated reference count is zero and the snapshot
> was previously marked for deferred destruction with "zfs destroy -d",
> then the snapshot and the clone will be destroyed as part of a single
> transaction group.  That is, either they will both be destroyed
> or neither of them will be.
> 
> If the snapshot has an elevated user reference count or other children,
> the clone will be destroyed and the snapshot will remain.
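> 
> As an illustration (again, the names are hypothetical):
> 
> # zfs snapshot tank/ws@base
> # zfs clone tank/ws@base tank/ws-clone
> # zfs destroy -d tank/ws@base       (a clone exists, so deletion is deferred)
> # zfs destroy tank/ws-clone         (last clone destroyed; tank/ws@base is
>                                      destroyed in the same transaction group)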
> 
> C.3.3. zfs receive
> 
> In the current model, receiving a "zfs send -R" (replication) stream
> will result in a "zfs destroy" on any snapshots that don't exist on
> the sending side. In the new model, the receiving side will do a
> "zfs destroy -d" instead.
> 
> C.4. New Subcommands
> 
> Three new subcommands are added: "zfs hold", "zfs release", and "zfs holds":
> 
> C.4.1. zfs hold [-r] <tag> <snapshot> ...
> 
> The zfs hold subcommand adds a single reference, named with the
> <tag> argument, to the given snapshot(s). Each snapshot has its own
> tag namespace, and tags must be unique within that space.
> 
> As long as a hold exists on a snapshot, attempts to "zfs destroy"
> that snapshot will return EBUSY.
> 
> If the -r option is given, a hold with the given tag will be
> applied recursively to the snapshots of all descendent file systems.
> For example, given these snapshots:
> 
> pool/fs1@snap1
> pool/fs1/fs2@snap1
> pool/fs2@snap1
> 
> This command:
> 
> # zfs hold -r mytag pool/fs1@snap1
> 
> will place a hold named "mytag" on "pool/fs1@snap1" and "pool/fs1/fs2@snap1",
> but not on "pool/fs2@snap1".
> 
> The actions taken by -r are atomic; either all of the snapshots will
> be held with the given tag, or none of them will.
> 
> C.4.1.1. Delegated Administration
> 
> The "zfs hold" subcommand requires the new 'hold' delegation permission.
> 
> C.4.2. zfs release [-r] <tag> <snapshot> ...
> 
> The zfs release subcommand removes a single reference named by
> the <tag> argument from the named snapshot(s). The tag must already
> exist for each snapshot.
> 
> The -r option does a recursive release, similar to the recursive
> hold in section C.4.1.
> 
> When the last user reference has been removed from a snapshot,
> the snapshot will be automatically destroyed if and only if:
> 
> 1) The snapshot has no clones.
>    AND
> 2) The snapshot has already been marked for deferred deletion
> with 'zfs destroy -d'.
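> 
> Continuing the hypothetical example from section C.3.1:
> 
> # zfs release mytag tank/home@monday
> 
> Since "tank/home@monday" has no clones, was already marked for deferred
> deletion, and this was its last hold, the snapshot is destroyed
> automatically.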
> 
> C.4.2.1. Delegated Administration
> 
> The "zfs release" subcommand requires the new 'release' delegation permission.
> 
> C.4.3. zfs holds [-r] <snapshot> ...
> 
> This subcommand lists all of the existing user references for
> the given snapshot(s) or family of snapshots. For example:
> 
> # zfs snapshot tank@snap2
> # zfs hold com.sun.sometag tank@snap2
> # zfs hold mytag tank@snap2
> # zfs holds tank@snap2
> tank@snap2:
>     TIMESTAMP                   TAG
>     Mon Apr 27 12:15:49 2009    com.sun.sometag
>     Mon Apr 27 12:16:02 2009    mytag
> 
> The -r option will list holds on the snapshots of all descendent file
> systems, in addition to the holds on the named snapshot.
> 
> If "zfs hold -r com.sun.sometag pool at snap" was used to recursively apply
> a tag, "zfs holds -r pool at snap" will display those tags.
> 
> C.4.3.1. Delegated Administration
> 
> The "zfs holds" subcommand requires the new 'hold' delegation permission.
> 
> C.5. Tag Naming Convention
> 
> We will suggest (but not enforce) that users use reverse-DNS form
> for user-reference tags, e.g. "com.sun.fishworks.replication".
> 
> Tag names are subject to the same restrictions as snapshot names, i.e.
> they are limited to alphanumeric characters plus [-_.: ].
> 
> We will reserve tag names starting with "." for use by direct callers
> of the relevant libzfs routines.
> 
> Stability
> 
> This case requests patch/micro release binding.  The new interfaces are
> committed. 
> 
> 6. Resources and Schedule
>     6.4. Steering Committee requested information
>       6.4.1. Consolidation C-team Name:
>               ON
>     6.5. ARC review type: FastTrack
>     6.6. ARC Exposure: open

