Hi Craig,

Thanks for your swift reply.

On 2017-12-15 05:17, Craig Barratt via BackupPC-users wrote:
> Unfortunately sparse files are not supported by rsync_bpc, and there are no plans to do so.

Okay. Not a big impact for the files BackupPC transfers anyway - I had just assumed the option was safe and harmless, but I have been proven wrong...

> I should make it a fatal error if that option is specified.

Yes, that would be great to avoid future mistakes.

> I believe a full backup (without --sparse of course) should update the files to their correct state.

Okay. Just so I understand: the MD5 hashes are generated on the server side, correct? So a file that was transferred incorrectly will not be stored under the hash of the original file? And a full backup does not skip files based merely on size, times and names, but compares the actual hashes of the file contents? If so, I see why running a full backup should resolve everything.
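
To show where my mental model comes from: this is roughly how I have been spot-checking single pool files, under the assumption that a v4 pool file is stored under the hex MD5 digest of its (uncompressed) contents. It is only an illustration of my understanding, not a claim about BackupPC's internals:

import hashlib
import sys
from pathlib import Path

def verify_pool_file(path):
    """Check that an uncompressed pool file hashes to the digest in its name.

    Assumption (my reading of the v4 layout, please correct me if wrong):
    the file name is the hex MD5 digest of the file's contents.
    """
    pool_file = Path(path)
    md5 = hashlib.md5()
    with pool_file.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest() == pool_file.name

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(("OK        " if verify_pool_file(name) else "MISMATCH  ") + name)

If that assumption holds, a file that was corrupted in transfer would end up under the digest of the corrupted content, which is why I was wondering whether a full backup re-checks the actual content.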


May I ask you for a short comment on point d) as well? If it's complicated, don't bother. But I found earlier questions on the mailing list about decompressing an entire pool in order to use ZFS or Btrfs compression, and while converting the pool is officially unsupported, I might try it if (and only if) my assumptions about what needs to be done are correct. I have put rough sketches of what I have in mind below the quoted point.
    d) On my *actual* server, I used compression. This incident taught
    me to verify some of the files manually, and to perhaps migrate to
    filesystem compression (which I had planned anyway) to keep things
    as simple as possible.
       d.1) BackupPC_zcat for verifying/decompressing has a remarkable
    overhead for a well-grown set of small files (even when pointed
    directly at the pool files). From what I can tell, Adler's pigz [2]
    implementation supports headerless zlib files and is *way* faster.
    Also, all my tests show that the files decompress to output matching
    the hashes encoded in the filenames. However, I remember that
    BackupPC's compression performs flushes within the stream, apparently
    much like pigz. Are BackupPC's compressed files *fully* in default
    zlib format, or do I need to expect trouble with large files in
    corner cases?
       d.2) Conceptually, what is needed to convert an entire v4 pool to
    uncompressed storage? Is it just
        - decompressing all files from cpool/??/?? to pool/??/??
          (identical names, because the hashes are computed on the
          decompressed data),
        - moving the poolCnt files from cpool/?? to pool/??,
        - replacing the compression level in pc/$host/backups and
          pc/$host/nnn/backupInfo,
    or do any refCnt files need to be touched as well?
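
To make d.1 concrete, here is the kind of bulk check I have in mind (Python instead of pigz, but the same idea). The assumptions baked in are exactly the ones I am asking about: that a cpool file is one or more concatenated deflate streams - with or without the two-byte zlib header, I try both - possibly containing full flushes, and with nothing else appended. BackupPC_zcat remains the reference implementation, of course:

import hashlib
import sys
import zlib
from pathlib import Path

def decompress_cpool_file(path):
    """Decompress a cpool file in one go and return the raw data.

    Assumptions I would like to have confirmed: the file consists of one
    or more concatenated deflate streams (zlib-wrapped or headerless),
    possibly with full flushes inside, and nothing else appended.
    """
    data = Path(path).read_bytes()
    out = bytearray()
    while data:
        # Try a zlib-wrapped stream first, then a raw ("headerless") one.
        for wbits in (zlib.MAX_WBITS, -zlib.MAX_WBITS):
            d = zlib.decompressobj(wbits)
            try:
                chunk = d.decompress(data)
                break
            except zlib.error:
                continue
        else:
            raise ValueError("%s: not a deflate stream I recognize" % path)
        out += chunk + d.flush()
        data = d.unused_data  # loop again if streams are concatenated
    return bytes(out)

def verify_cpool_file(path):
    """MD5 of the decompressed data must match the pool file name."""
    return hashlib.md5(decompress_cpool_file(path)).hexdigest() == Path(path).name

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(("OK        " if verify_cpool_file(name) else "MISMATCH  ") + name)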
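
And this is the mechanical part of d.2 as I picture it, reusing decompress_cpool_file() from the sketch above (the module name in the import is made up). It deliberately leaves out everything I am unsure about - the compression level in pc/$host/backups and pc/$host/nnn/backupInfo, any refCnt handling, and locking against a running daemon - and defaults to a dry run:

import shutil
from pathlib import Path

from verify_cpool import decompress_cpool_file  # helper from the sketch above (made-up module name)

# Adjust to the real TopDir of the installation.
TOPDIR = Path("/var/lib/backuppc")
CPOOL = TOPDIR / "cpool"
POOL = TOPDIR / "pool"

def convert_cpool_to_pool(dry_run=True):
    """Decompress cpool/??/?? entries into pool/??/?? and move the poolCnt files.

    The pool file names stay identical because the digests are computed
    on the decompressed data (point d.2 above). Not handled here:
    compression level in the per-host metadata, refCnt files, locking.
    """
    for cfile in sorted(CPOOL.glob("??/??/*")):
        target = POOL / cfile.relative_to(CPOOL)
        target.parent.mkdir(parents=True, exist_ok=True)
        if dry_run:
            print("would write", target)
        else:
            target.write_bytes(decompress_cpool_file(cfile))
    for cnt in sorted(CPOOL.glob("??/poolCnt")):
        dest = POOL / cnt.relative_to(CPOOL)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if dry_run:
            print("would move", cnt, "->", dest)
        else:
            shutil.move(str(cnt), str(dest))

if __name__ == "__main__":
    convert_cpool_to_pool(dry_run=True)

If the three steps listed under d.2 are really all there is to it, something along these lines is what I would test on a copy of the pool first.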


Thanks a lot,
Alex
