I tend to agree. It sounds like an image library archive project, not a
backup project.

Robert Trevellyan


On Thu, Feb 17, 2022 at 10:36 AM Greg Harris <ghar...@teamexpansion.org>
wrote:

> Yet another sideline sitter here.  However, here goes with a very
> questionable thought.  Maybe BackupPC is the wrong tool for this particular
> directory/instance?  Perhaps something like Amazon Glacier with Cryptomator
> is a wiser choice in this one scenario?
>
> Thanks,
>
> Greg Harris
>
> On Feb 17, 2022, at 10:27 AM, backu...@kosowsky.org wrote:
>
> G.W. Haywood via BackupPC-users wrote at about 13:24:26 +0000 on Thursday,
> February 17, 2022:
>
> Hi there,
>
> On Thu, 17 Feb 2022, brogeriofernandes wrote:
>
> I'm wondering if it would be possible to run a command just after the
> client transfers file data but before it's stored in the BackupPC
> pool. My idea is to apply an image compression, like JPEG XL lossless,
> instead of the standard zlib one.
>
>
> Have you considered using a compressing filesystem on the server?
>
>
> I think that is the best idea, as:
> 1. It is transparent to BackupPC.
> 2. You benefit from all the optimizations of the underlying file
>   system.
> 3. No new coding is needed.
> 4. No need to create special compression cases for different file
>   types.
> 5. Compression is automagically multi-threaded and cached/backgrounded,
>   so it minimally slows down the program (I never "feel" the
>   overhead of compression on my btrfs/zstd file system).
> 6. It's essentially totally reliable.
>
> It's particularly easy for a filesystem like btrfs... where you can
> use 'zstd', which is both fast and compresses well.
>
> I would compare the speed and compression ratio of btrfs with 'zstd'
> against the speed and compression ratio of your raw lossless JPEG
> compression.
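>
> As a minimal sketch of the above (device path, mount point, and
> compression level are examples, not anything specific to your setup),
> enabling zstd on btrfs and then measuring the achieved ratio might
> look like:
>
> ```shell
> # Mount the pool filesystem with zstd compression (btrfs).
> # Levels range 1-15; 3 is the default. Paths here are placeholders.
> mount -o compress=zstd:3 /dev/sdb1 /var/lib/backuppc
>
> # Or make it persistent via an /etc/fstab entry:
> # /dev/sdb1  /var/lib/backuppc  btrfs  compress=zstd:3  0  0
>
> # Report the actual on-disk compression ratio for the pool
> # (compsize comes from the btrfs-compsize package on most distros):
> compsize /var/lib/backuppc
> ```
>
> Note that btrfs only stores a file compressed when compression
> actually shrinks it, so already-compressed JPEGs may show little
> gain, which is exactly what the comparison above would reveal.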
>
>
> ... more bandwidth-friendly ... compression before transferring to
> server ...
>
>
> The data can be compressed on the client by the transfer tools during
> the transfer.  This can be purely to reduce network load and it can be
> independent of any compression (perhaps by a different method) of the
> data when it is stored by the server.  The compression algorithms for
> transfer and storage can be chosen for different reasons.  Of course
> if it is required to perform multiple compression and/or decompression
> steps for each file, the server will have to handle an increased load.
>
> This can all be more or less transparent to BackupPC.
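>
> For example, if the transfer tool is rsync (paths and hostname below
> are placeholders), wire compression can be chosen independently of
> whatever the server does for storage:
>
> ```shell
> # rsync >= 3.2 lets you pick the wire compression algorithm:
> rsync -a --compress --compress-choice=zstd /source/ backupserver:/dest/
>
> # Older rsync only offers zlib; -z enables it and
> # --compress-level tunes the CPU/bandwidth trade-off:
> rsync -az --compress-level=6 /source/ backupserver:/dest/
> ```
>
> Either way this only affects data in transit; the server is free to
> recompress (or not) when it writes to the pool.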
>
> --
>
> 73,
> Ged.
>
>
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
>
