On 04/23/2016 07:22 PM, Graham Percival wrote:
Hmm.  I suspect there are two different ideas about snapshots and backups here.
I'm not an expert in this field (actually, I've never used snapshots at all!),
but it sounds like you might be doing this:

[best viewed in a fixed-width font]

filesystem RW -+--> filesystem RO ---> run tarsnap -+-> filesystem RW
               |--> snapshot RW --> normal usage ---^ (merge) -> delete

What I'm suggesting is this:

filesystem RW -+--> (RW, run SQL, emails, etc) ---> filesystem RW
               |--> snapshot RO --> run tarsnap --> delete snapshot


Essentially, you would not be making your *home directory* read-only.
Instead, you could create a read-only copy of your home directory (which
continues to be read-write), then you archive that *copy*.

Please let me know if I've misunderstood your current approach.  If my guess
was accurate, then I'll make a non-ascii version of the above diagrams and
slap it somewhere on the tarsnap website to help other people.  :)


The first diagram describes, as best I've understood the documentation, how LVM implements snapshots. I believe I have no filesystem that implements the second, but it looks fine with one proviso:

The first allows the creation of an immediate snapshot, so that the background processes can continue to run uninterrupted. No time passes beyond clearing a few pointers to the scratch space, which is to take blocks of writes for the next while. That temporary scratch space can be far smaller than the full dataset: it need only be large enough to hold the blocks of new data until they can be merged back in after the backup completes.
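The scratch-space arrangement described above can be sketched as a toy block-level model (purely illustrative Python, not real LVM code; all names here are invented). The origin is frozen as the backup source, normal writes land in a small scratch area keyed by block number, and after the backup those blocks are merged back:

```python
class SnapshotVolume:
    """Toy model: origin frozen for backup, writes diverted to scratch space."""

    def __init__(self, blocks):
        self.origin = list(blocks)   # frozen view the backup will archive
        self.scratch = {}            # block number -> new data (changed blocks only)

    def write(self, blockno, data):
        # Normal usage continues: new data goes to scratch, origin untouched.
        self.scratch[blockno] = data

    def read(self, blockno):
        # The live view prefers scratch, falling back to the frozen origin.
        return self.scratch.get(blockno, self.origin[blockno])

    def backup_view(self):
        # What the archiver sees: the dataset exactly as at snapshot time.
        return list(self.origin)

    def merge(self):
        # After the backup completes, fold the scratch writes back in.
        for blockno, data in self.scratch.items():
            self.origin[blockno] = data
        self.scratch.clear()


vol = SnapshotVolume(["a", "b", "c", "d"])
vol.write(1, "B")             # an update arrives mid-backup
archived = vol.backup_view()  # still ['a', 'b', 'c', 'd'] - a consistent view
vol.merge()                   # one changed block folded back; scratch emptied
```

Note that the scratch space grows only with the number of blocks written during the backup window, which is why it can be far smaller than the dataset.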

But the second, it seems to me, requires a partition (or filesystem, or portion of the tree) to be copied into a temporary space large enough to hold the full dataset to be archived, before the next update can be accepted and the backup started alongside it. If you think it can be achieved without a full data copy being taken while the updates are frozen, then this will be a lightbulb moment for me: continuing to update while creating the copy is no different from updating while backing up; it just pushes the problem one remove further away. I can see how it might be achieved instantly with a mirrored dataset, freezing one half and continuing to update the other, but we've not discussed those.
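For what it's worth, one way the second diagram can avoid a full copy is for the read-only snapshot to initially *share* every block with the live filesystem, preserving an old block aside only when the live side overwrites it. A minimal sketch under that assumption (a toy model, not any real filesystem's implementation):

```python
class CowSnapshot:
    """Toy model: read-only snapshot sharing blocks with a live filesystem."""

    def __init__(self, blocks):
        self.live = list(blocks)   # stays read-write throughout the backup
        self.saved = {}            # old blocks preserved for the snapshot view

    def write(self, blockno, data):
        # Before the live side overwrites a block, save the old copy once
        # so the snapshot can still present it (copy-on-write).
        if blockno not in self.saved:
            self.saved[blockno] = self.live[blockno]
        self.live[blockno] = data

    def snapshot_read(self, blockno):
        # The frozen view the archiver reads: old data where overwritten,
        # shared live data everywhere else.
        return self.saved.get(blockno, self.live[blockno])

    def delete_snapshot(self):
        # Nothing to merge: the live copy was authoritative all along.
        self.saved.clear()


fs = CowSnapshot(["a", "b", "c"])
fs.write(0, "A")                                   # live updates continue
frozen = [fs.snapshot_read(i) for i in range(3)]   # ['a', 'b', 'c']
fs.delete_snapshot()
```

As with the first scheme, the extra space needed is proportional to the blocks that change during the backup, not to the dataset; the difference is that here the *old* blocks are the ones set aside, so the snapshot can simply be deleted with no merge step.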

John.
