20 Jul 2022 13:15:13 Rabin Yasharzadehe <ra...@rabin.io>:

> Using ZFS with sanoid (https://github.com/jimsalterjrs/sanoid).
> 
> ZFS will give you all the benefits of a COW filesystem - compression,
> snapshots, and much more. Combined with the sanoid utility, it lets
> you automate snapshots and send them to a remote system. And because
> ZFS knows at the block level what changed between snapshots, send &
> receive is much more efficient: unlike rsync, which has to stat and
> compare every file to figure out what to sync, ZFS only needs to
> compile the list of blocks that changed between two snapshots and send
> those.
> This also works when the volume is encrypted, so you can have a remote
> system that is encrypted at rest and keep pushing snapshots to it
> without ever having to unlock it.
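> 
> For example, a minimal sketch (the pool, dataset, and host names here
> are made-up placeholders):
> 
>     # take today's snapshot
>     zfs snapshot tank/data@2022-07-20
>     # -i sends only the blocks changed since the previous snapshot;
>     # -w sends them raw (still encrypted), so the remote side never
>     # needs the key loaded
>     zfs send -w -i tank/data@2022-07-19 tank/data@2022-07-20 | \
>         ssh backuphost zfs receive backuppool/data
> 
> sanoid's companion tool syncoid automates the same thing, e.g.
> 
>     syncoid --sendoptions=w tank/data root@backuphost:backuppool/data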
> 
> 
> --
> Rabin
> 
> 
> On Sun, 17 Jul 2022 at 16:50, Shlomo Solomon <shlomo.solo...@gmail.com> wrote:
>> I recently lost some files because of a bad disk - hardware problem.
>> 
>> I do regular backups so I was not really worried, but I now see that I
>> have a problem with my backup strategy so I'd like to know how others
>> handle/prevent what happened to me.
>> 
>> I backup files using rsync and I basically have 2 types of backups.
>> 
>> My most important files are backed up every night. I do incremental
>> backups using: rsync -aqrlvtogS --ignore-errors --backup
>> I keep about 4 months of backups. So if a file is damaged, missing,
>> or accidentally deleted, I can find a good copy - even if, for
>> example, I screwed up the file and only discovered the problem a few
>> days later.
>> 
>> BUT, all the rest of my files - music, videos, pictures, etc. - are
>> backed up daily and weekly on 2 different physical drives using:
>> rsync -qrlvtogS --delete --ignore-errors
>> I use --delete to prevent accumulating garbage on my backup disks.
>> 
>> So here's the problem: Because of a hardware problem, several files on
>> one of my disks were lost. As a result, the daily backup script
>> "thought" that those files should be deleted from the daily backup.
>> Unfortunately, I did not notice the problem. A few days later, those
>> same files were also deleted from the weekly backup. So they are lost.
>> 
>> So on the one hand I need --delete to avoid keeping backups of old
>> garbage, but on the other hand --delete cannot tell whether I deleted
>> a file or whether it disappeared because of a hardware problem.
>> 
>> 
>> 
>> -- 
>> Shlomo Solomon
>> http://the-solomons.net
>> Claws Mail 3.17.5 - KDE Plasma 5.18.5 - Kubuntu 20.04
>> 
Coming to this late: I use "backintime", which is a Python wrapper and
GUI for rsync.
One of its settings, "smart remove", removes old snapshots but keeps
some around for exactly the kind of problem described here, e.g.
2/day for the last week, then 1/week for a month, then 1/month for the
previous year, etc. (you set all of these yourself).
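Under the hood, backintime keeps each snapshot as an rsync copy in
which unchanged files are hard links into the previous snapshot, so a
deletion on the source only affects snapshots taken afterwards. A rough
sketch of the same idea with plain rsync (all paths are placeholders):

    # every run produces what looks like a full copy, but unchanged
    # files are hard links into the previous snapshot, so older
    # snapshots still hold files later deleted from the source
    today=$(date +%F-%H%M%S)
    rsync -a --delete --link-dest=/backup/latest /data/ "/backup/$today/"
    ln -sfn "/backup/$today" /backup/latest

"Smart remove" then prunes by deleting whole snapshot directories on
the schedule above.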

I once made a similar deletion error that propagated through my
Dropbox, and I only noticed it after the 30-day version history there
had expired, but I easily pulled the files off my backup HDD from an
old image.
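
For the failure mode Shlomo described, rsync's --max-delete option is
also worth knowing as a safety net (a sketch; the paths and the limit
are arbitrary):

    # if the run would delete more than 50 files, rsync skips the
    # remaining deletions, prints a warning, and exits with code 25
    rsync -a --delete --max-delete=50 /data/ /backup/daily/

That way a disk that silently drops a pile of files aborts the mirror
run instead of quietly propagating the loss into the backup.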
_______________________________________________
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
