On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
>
>> Its not raid, just a btrfs single on disk (no partition).  Contains a
>> single borgbackup repo for an offline backup of all the online
>> borgbackup repo's I have for a 3 times a day backup rota of individual
>> machines/data stores
> So you are borg’ing a repo into a repo? I am planning on simply rsync’ing
> the borg directory from one external HDD to another. Hopefully SMR can cope
> with this adequately.
>
> And you are storing several machines into a single repo? The docs say this
> is not supported officially. But I have one repo each for /, /home and data
> for both my PC and laptop. Using a wrapper script, I create snapshots that
> are named $HOSTNAME_$DATE in each repo.

Basically, yes: I take an hourly snapshot of approximately 500 GiB of
data on moosefs, plus borgbackup runs 3 times a day to individual repos
on moosefs for each host.  3 times a day, the latest snapshot is stuffed
into a borg repo on moosefs and the old snapshots are deleted.  I
currently push all the repos into a borg repo on the USB3 SMR drive
manually, once a day or so.
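The rota above can be sketched roughly as follows (a sketch only - the
mount point /mnt/mfs, the repo locations and the snapshot paths are
hypothetical, adjust for your own layout):

```shell
#!/bin/sh
# Hypothetical sketch of the 3-times-a-day rota described above.
# /mnt/mfs is an assumed moosefs mount point; all paths are examples.

HOST=$(hostname)
STAMP=$(date +%Y-%m-%d_%H%M)

# 1. Take a (lazy) moosefs snapshot of the live data.
mfsmakesnapshot /mnt/mfs/data /mnt/mfs/snapshots/"$STAMP"

# 2. Stuff the latest snapshot into the per-host borg repo on moosefs.
borg create --stats \
    /mnt/mfs/borg/"$HOST"::"${HOST}_${STAMP}" \
    /mnt/mfs/snapshots/"$STAMP"

# 3. Delete the older snapshots, keeping only the one just taken.
for snap in /mnt/mfs/snapshots/*; do
    [ "$snap" != "/mnt/mfs/snapshots/$STAMP" ] && rm -rf "$snap"
done
```

The once-a-day push of the repos themselves into the offline repo on the
SMR drive is then just another `borg create` with /mnt/mfs/borg as the
source.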

1. rsync (and cp etc.) are dismally slow on SMR - use them where you
have to, avoid them otherwise.

2. borgbackup with small updates goes very quickly.

3. Run borgbackup often to keep the changes between updates small - the
time to back up will stay short.

4. borg'ing a repo into a repo works extremely well - however, there are
catches around backup set names and the file-change tests used.
(ping me if you want the details)
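(A guess at the catches meant here, for illustration: borg's files cache
decides whether to rehash a file based on ctime, size and inode by
default, and archive names inside a repo must be unique. When the source
is itself a borg repo whose segment files get rewritten or copied, a
relaxed change test plus timestamped archive names helps; the paths
below are placeholders:)

```shell
# Hypothetical example; repo paths are placeholders.
# --files-cache=mtime,size avoids rehashing everything when ctime or
# inode churn (e.g. after a copy) makes the default cache miss, and the
# date stamp keeps archive names unique across runs.
borg create --files-cache=mtime,size \
    /mnt/usb/offline::"repos_$(date +%Y-%m-%d_%H%M)" \
    /mnt/mfs/borg
```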

5. Yes, I have had disasters (e.g., a poorly thought out rm -rf in a
moosefs directory, unstable power that took a while to cure, ...)
requiring under-fire restoration of both large and small datasets - it
works!

6. Be careful of snapshot resources on moosefs - moosefs uses a defined
amount of master memory for each file stored.  Even with the lazy
snapshot method, taking a snapshot will roughly double the memory usage
on the master for that portion of the filesystem, and taking too many
snapshots multiplies the effect.  Once the master goes into swap, it
becomes a recovery effort.  Also keep in mind that trashtime is carried
into the snapshot, so the data may remain in trash even after deletion -
it's actually easy to create a DoS condition by not paying attention to
this.
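For the trashtime point, the MooseFS client tools let you inspect and
cap it on the snapshot tree (the path is hypothetical):

```shell
# Show how long deleted files linger in trash (seconds), recursively.
mfsgettrashtime -r /mnt/mfs/snapshots

# Set trashtime to 0 on the snapshot tree so deleted snapshots are
# freed immediately instead of sitting in trash and holding master
# memory.
mfssettrashtime -r 0 /mnt/mfs/snapshots
```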

BillK


>
>> - I get an insane amount of de-duplication that way for a slight decrease
>> in convenience!
> And thanks to the cache, a new snapshot is usually done very fast. But for
> a yet unknown reason, sometimes Borg re-hashes all files, even though I
> didn’t touch the cache. In that case it takes 2½ hours to go through my
> video directory.
>