On Wednesday, 24 January 2018 18:42:23 CET Patrik Janoušek wrote:
> Hello,
> I'd like to ask whether it is better to use compression in BackupPC or in
> ZFS. I don't want to use dedup, so the only criteria are CPU
> time and compression ratio.
> I usually back up files like photos and docs.
If you use lz4 compression you will barely notice it on a modern CPU. What you
should keep in mind is how compressible your data can actually be on disk. For
example, with a raidz3 of 8+3 disks, ashift=12 and recordsize=128K you will not
gain much: each 128K record leaves only 16K per data disk to compress, and ZFS
can only write in multiples of 4K.

Short rule for compression to work with ashift=12: use a mirror, or enable the
large_blocks feature and set recordsize=1M.
https://docs.google.com/spreadsheets/d/1pdu_X2tR4ztF6_HLtJ-Dc4ZcwUdt6fkCjpnXxAEFlyA/edit#gid=804965548

On my backup server (5 years old, from before recordsize=1M was available) I
use the mirror for BackupPC v3, using rsync to back up /etc, database dumps and
local user data such as web browser and email configs. The raidz3 is used by
backupafs (a BackupPC clone for AFS) for dumps of user home directories,
project data and databases (roughly 1GB parts of tar archives).

I see:

5 x mirror:
  used           4.98T
  compressratio  1.39x
  compression    lz4
  recordsize     128K
  ashift         12

3 x raidz3 (8+3):
  used           45.6T
  compressratio  1.08x
  compression    lz4
  recordsize     128K
  ashift         12

regards
Markus Köberl

_______________________________________________
BackupPC-users mailing list
[email protected]
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
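The per-disk arithmetic above can be sketched as follows. This is a
simplification, not how ZFS internally allocates (it ignores parity and
padding sectors), but it shows the rounding effect: with ashift=12 each data
disk can only shrink in whole 4K sectors, so a 16K chunk needs roughly a 1.34x
ratio before any space is reclaimed, while a 128K chunk benefits from much
smaller ratios.

```python
# Why small per-disk chunks compress poorly on raidz.
# Assumptions: ashift=12 -> 4 KiB sectors, 8+3 raidz3 -> 8 data disks.

SECTOR = 4 * 1024          # ashift=12: minimum allocation unit
DATA_DISKS = 8             # 8+3 raidz3: 8 data disks per full stripe

def allocated_per_disk(recordsize: int, ratio: float) -> int:
    """Bytes allocated per data disk after compression.

    The per-disk chunk is recordsize / DATA_DISKS; ZFS can only write
    whole sectors, so the compressed size rounds UP to a sector multiple.
    """
    chunk = recordsize // DATA_DISKS
    compressed = chunk / ratio
    sectors = -(-compressed // SECTOR)   # ceiling division
    return int(sectors) * SECTOR

# recordsize=128K: 16 KiB per disk = 4 sectors; a 1.2x ratio saves nothing.
print(allocated_per_disk(128 * 1024, 1.2))    # -> 16384 (still 4 sectors)
# recordsize=1M: 128 KiB per disk = 32 sectors; 1.2x now reclaims 5 sectors.
print(allocated_per_disk(1024 * 1024, 1.2))   # -> 110592 (27 sectors)
```

This rounding is also why the mirror (whole 128K records compressed as one
unit per disk) shows a much better compressratio than the raidz3 in the
numbers above.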
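For reference, the recommended settings would look like this on a current ZFS.
The pool/dataset names (tank, tank/backup) are placeholders; large_blocks
requires a pool version that supports the feature, and recordsize only affects
newly written files. This is a config fragment, not a tested script.

```shell
# Enable the large_blocks feature on the pool (needed for recordsize > 128K)
zpool set feature@large_blocks=enabled tank

# lz4 compression and 1M records on the backup dataset
zfs set compression=lz4 tank/backup
zfs set recordsize=1M tank/backup

# Afterwards, check what you actually achieve
zfs get compressratio tank/backup
```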
