[CC'ing openafs-devel]

One of the areas of concern that I have is the use of the vos -clone
option and the impact it has on the resulting volume timestamps.

I will reply in more depth later, but I am unable to do so now.

Jeffrey Altman


On 12/3/2014 12:17 PM, Jeffrey Hutzelman wrote:
> On Wed, 2014-12-03 at 06:43 -0500, Jeffrey Altman wrote:
> 
>>> The updateDate is set to the same value as the copyDate when a volume is
>>> newly created.  It is propagated during cloning, recloning, and
>>> forwarding.  However, as with the creation date, recent versions of 'vos
>>> restore' allow the user to select between setting a new date, using the
>>> one in the dump, or preserving a target's existing date.  The default
>>> for 'vos restore' is to use the time in the dump, but any other callers
>>> of UV_RestoreVolume will default to setting a new time.  This last is
>>> actually a bug, introduced in 2004 when the flags to 'vos restore' were
>>> added.  Prior to that time, the only available behavior was to use the
>>> updateDate contained in the dump.
>>
>> The bug introduced to UV_RestoreVolume in 2004 will be triggered when
>> volumes are restored via butc or when the Norbert perl binding is used.
> 
> Yup.  And potentially by other backup tools as well, if they use
> UV_RestoreVolume and don't do anything else to set the dates (which is
> unlikely, especially for updateDate, because the ability to set it via
> an RPC was introduced at the same time as the vos options described
> above, and the bug itself).
> 
> We really should get this bug fixed, now that we know about it.
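
A quick sketch of the sort of fix I have in mind, assuming the RV_LU*
flag names that 'vos restore' already passes down (the exact spot inside
UV_RestoreVolume() may differ):

    /* If the caller supplied no explicit lastupdate policy, fall back to
     * the pre-2004 behavior of taking the updateDate from the dump,
     * rather than stamping the time of the restore. */
    if ((flags & (RV_LUDUMP | RV_LUNEW | RV_LUKEEP)) == 0)
        flags |= RV_LUDUMP;

That way butc and other UV_RestoreVolume callers get the old behavior
back without having to change.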
> 
> 
>>> Additionally, the updateDate is modified whenever the modify timestamp
>>> on a vnode changes.  However, it is _not_ modified when volume metadata
>>> changes.
>>
>> I consider it a bug that the updateDate is not modified when volume
>> metadata that is reflected in a dump is modified.  I believe that
>> OpenAFS should fix this.
> 
> It's not that straightforward, and I think it's a topic for discussion
> on openafs-devel.  An argument could certainly be made that anything
> that touches a volume, whether it appears in a dump or not, should bump
> the update stamp.  However, that's more than a bug fix; it's a semantic
> change.  The updateDate currently describes the _contents_ of the
> volume; you want to change it to describe the volume itself.  That could
> be important, especially since the updateDate may be used in a variety of situations
> for creating and processing incremental dumps, as an indicator of which
> vnodes need to be processed.
> 
> 
>>> For a backup volume, the backupDate (if set) is clearly safe, and is
>>> also the newest possible safe time, because it explicitly records the
>>> snapshot date.  However, it may be unset (zero).  In such a case, the
>>> best we can do is to either use the backupDate or treat it like any
>>> other clone, which means using the creationDate (see below).  The only
>>> differences in handling between backup volumes and other clones are that
>>> a backup volume is always the result of a cloning and that its
>>> backupDate is set.
>>
>> The result of 11433+11468 is that the backupDate is always used for
>> backup volumes.  To handle the case where backupDate might be unset, we
>> could check for V_backupDate(vp) == 0 and use creationDate in that
>> situation.
> 
> Agreed.  I think that check would even be a reasonable addition to
> 11468.
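
Something along these lines is what I had in mind -- a sketch only, using
the V_backupDate()/V_creationDate() accessors mentioned above plus V_type();
I have not checked exactly where it would land in 11468:

    /* Reference time for a clone: a backup volume's backupDate records
     * the snapshot time exactly, but it may never have been set.  If it
     * is zero, fall back to the creationDate, the same as for any other
     * clone. */
    afs_uint32 refDate;

    if (V_type(vp) == backupVolume && V_backupDate(vp) != 0)
        refDate = V_backupDate(vp);
    else
        refDate = V_creationDate(vp);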
> 
> 
>>> The gotcha case has to do with volumes which are themselves the product
>>> of a restore done using 'vos restore' or UV_RestoreVolume().  In such
>>> volumes, the creationDate has usually been reset to the time of the
>>> restore (always, with older AFS), and with versions since 2004 or so,
>>> the updateDate may also have been reset.  This means that the normally
>>> "correct" timestamps may in fact be dangerously new.
>>>
>>> I think we can disregard the case where the updateDate is incorrect,
>>> especially if we fix the bug which causes UV_RestoreVolume() to reset it
>>> by default.  Admins who explicitly do something else should know what
>>> they are getting themselves into.  This means that we can basically
>>> assume that it is OK to use updateDate for RW volumes.
>>
>> Sites that use the OpenAFS backup tooling might have triggered this bug,
>> and that is a problem.
> 
> It is, but in practice it's probably not.  A volume restored as RW is
> writable, and the moment it is modified, any information about the
> updateDate of the volume it came from is destroyed anyway.  For the
> situation to be problematic, the following sequence has to occur:
> 
> (1) dump from volume A
> (2) volume A changes
> (3) restore the dump from (1) to volume B
> (4) dump from volume B
> ... arbitrarily long delay ...
> (5) incremental dump from volume A
> (6) apply the dump from (5) to volume B
> (7) incremental dump from volume B.
> 
> The restore in (3) will set the update date on volume B to something
> that is newer than the change in (2).  Then the dump in (4) will contain
> an incorrect "to" time.  However, this only matters if the change in (2)
> is eventually going to make its way into volume B.  That means you have
> to do further incremental transfers from A to B after volume B is
> online.
> 
> Further, note that an actual change to volume B anytime between (3) and
> (4) would have the same effect as the bug and would also mean B is no
> longer an appropriate target for incremental transfer from A.  Thus, for
> the problem to actually matter, you have to not plan on making any
> changes to the RW volume B until after doing any more incremental
> restores from A.
> 
> And of course, the problem we're discussing is the "to" time in dump (4),
> and how it affects the selection of the "from" time in (7).  So if you
> don't start doing backups of B until all the incremental transfers are
> done, it doesn't matter.
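
To put illustrative (completely made-up) times on that sequence:

    t=100  (1) full dump of A; the dump's "to" time is 100
    t=200  (2) a file in A changes
    t=300  (3) dump (1) is restored into B; the bug stamps B's
               updateDate = 300
    t=310  (4) dump of B; its "to" time is recorded as 300, even though
               B's contents only reflect A as of t=100
    t=400  (5) incremental dump of A since t=100, containing the change
               from (2)
    t=410  (6) dump (5) is applied to B, bringing in the change made at
               t=200
    t=500  (7) incremental dump of B, with its "from" time chosen from
               (4)'s "to" time of 300; the change from (2), stamped 200,
               falls inside the window that (4) falsely claims to cover,
               and is exactly the kind of update (7) can miss.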
> 
> Finally, let's not even get started on what happens if you begin doing
> dumps of clones of B, all of which bear timestamps with no relation to
> anything that went on in A, because A is not their parent.
> 
> 
> In short, a newly restored RW volume is a new volume, not a clone of the
> original, and must be treated as such.  That means there are going to be
> issues no matter what we do.  However, operationally, I don't think it's
> a problem, because in most cases such a volume either won't be backed up
> or won't receive additional incremental transfers once the initial
> series of restores is done, and especially not from dumps made after the
> new volume is created.
> 
> IOW, this doesn't affect temporary restores that aren't backed up, and
> it doesn't affect permanent restores to replace a lost volume (since the
> original no longer exists).
> 
> 
>>> Unfortunately, we cannot so easily disregard the issue with
>>> creationDate, since that is reset by default during a restore and always
>>> has been.  I'm not entirely sure what to do here -- creationDate is
>>> certainly better than copyDate, but it also might be wrong.
>>
>> The behavior of "vos restore" and UV_RestoreVolume might have another
>> set of bugs, since the default behavior of setting the creationDate
>> should take into account whether the dump is from an RW, an RO, or a BK
>> and whether or not it is being restored as an RW or a clone.
> 
> 
>> When performing a restore to a RO (vos restore -readonly) it is not safe
>> to set the creationDate to the restore timestamp or perhaps even to keep
>> the existing volume's creationDate.  Instead creationDate should be set
>> to one of the dump's timestamps.
> 
> Perhaps, but AFS has more or less always done this, so we ought to cope
> with it as best we can, and we ought to think about it carefully before
> we change it.  I'll comment on the specifics below, but perhaps this is
> a conversation we should continue on openafs-devel.
> 
> 
>>   If the dump was taken from a BK then
>> the backupDate will be non-zero.  If so, using that value for the
>> creationDate seems like a good choice.
> 
> This seems like a good idea, but only if the volume type in the dump is
> actually BKVOL.  Otherwise, the backupDate doesn't actually have
> anything to do with the contents of the dump.  Unfortunately, this is a
> bit hard to get at, because RestoreVolume throws it away (the type of
> the restored volume is set when it is created; it cannot be changed).
> 
>>    If backupDate is zero, then
>> perhaps use the newer of creationDate and updateDate, since if creationDate
>> is newer, that would indicate that the dump was created from a clone.
>> The problem with using creationDate is that it could be the result of a
>> bug and therefore be too new.
> 
> Well, if the dump was done from an RW, then clearly you should just use
> the updateDate.  This has the same problems as using backupDate from a
> BK volume, in that it is quite difficult for UV_RestoreVolume to find
> out what volume type was in the dump.  Doing that requires new
> interfaces into dumpstuff.c and a new RPC (probably the right thing is
> for a volserver transaction to record the parentId, cloneId, and type
> contained in the last volume header restored as part of that
> transaction, and provide an RPC to retrieve that).
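
For what it's worth, assuming the dump's volume type and dates were made
available through the new dumpstuff.c plumbing and volser RPC you describe,
the selection policy we have been circling around might look roughly like
this -- the helper and its arguments are placeholders, nothing below exists
today:

    /* Hypothetical helper: choose a creationDate for the restored volume
     * from the volume type and dates found in the dump.  dumpType and the
     * date arguments stand in for whatever the new interface would expose. */
    static afs_uint32
    pick_creation_date(afs_int32 dumpType, afs_uint32 backupDate,
                       afs_uint32 updateDate, afs_uint32 creationDate)
    {
        if (dumpType == BACKVOL && backupDate != 0)
            return backupDate;      /* BK dump: the snapshot time is exact */
        if (dumpType == RWVOL)
            return updateDate;      /* RW dump: time of the last content change */
        /* Otherwise (RO dump, or a BK dump with no backupDate): the newer of
         * creationDate and updateDate is the least-bad guess, as discussed. */
        return (creationDate > updateDate) ? creationDate : updateDate;
    }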
> 
> If the dump was from an RO, all bets are off, because the creationDate
> and updateDate of that volume may both be wrong, depending on how it was
> restored, and the copyDate and backupDate may be too old to be useful.
> 
> I think the least-bad thing is for restores to restore the creationDate
> and updateDate from the dump.  If the dump was from a volume where those
> dates are wrong, due to a bug or admin intervention, then they'll be
> wrong.  But at least they won't be any more wrong than they were when
> the dump was made.
> 
> -- Jeff
> 
