Hi,

I am about to take the plunge with BackupPC and would appreciate input on the following three points about using btrfs for the data pool (bit rot protection is key for me).


From what I've read, btrfs (like many file systems) suffers over time as fragmentation increases. I have seen suggestions such as not putting databases on btrfs for this reason, but that seems silly on a number of levels. At the least one should take special care with databases (know your data and adjust your maintenance accordingly), but that holds for any database on any FS. My question is how best to approach this with a combination of rebalancing and scrubbing, or whether there is another way or other aspects to keep in mind.
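For concreteness, this is the kind of periodic maintenance I had in mind; the mount point, schedule, and usage threshold below are placeholders, not recommendations:

```shell
# Hypothetical /etc/cron.d/btrfs-backuppc entries (assumed pool
# mount point: /var/lib/backuppc).

# Monthly: compact partially-filled data chunks. -dusage=50 only
# rewrites chunks that are less than 50% full, so it stays cheap
# compared to a full balance.
0 3 1 * *  root  /usr/bin/btrfs balance start -dusage=50 /var/lib/backuppc

# Monthly, a week later: read every block and verify it against its
# checksum; on redundant profiles (RAID1/10) scrub also repairs bad
# copies from the good one.
0 3 8 * *  root  /usr/bin/btrfs scrub start /var/lib/backuppc
```

As I understand it, scrub addresses bit rot while balance addresses chunk-level fragmentation, so the two are complementary rather than interchangeable.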

I won't be using snapshots in btrfs, since BackupPC effectively implements its own. When I read up on how btrfs implements COW, I thought it would be safest to use nodatacow, but then read that doing so also disables bit rot protection, which is a real bummer. Am I missing something, or do I have that right?
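To frame the question, this is the mount option I was considering; the device and mount point are placeholders, and the comment states my (possibly wrong) understanding of the trade-off:

```shell
# Hypothetical /etc/fstab entry. My understanding: nodatacow implies
# nodatasum, i.e. no data checksums, so scrub can no longer detect
# bit rot in file contents -- exactly the protection I'm after.
/dev/sdb1  /var/lib/backuppc  btrfs  defaults,nodatacow  0  0
```

A per-directory alternative would be `chattr +C` on an empty pool directory before populating it, but as I understand it that carries the same loss of checksumming for the affected files.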

Given how I understand BackupPC implements compression, I'd rather have btrfs handle compression, since that seems to involve less time spent on redundant calculations. Does that make sense?
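Concretely, the division of labor I'm picturing is: store the pool uncompressed in BackupPC and let btrfs compress transparently. The paths, device, and zstd level below are assumptions:

```shell
# Hypothetical /etc/fstab entry: transparent zstd compression on
# the pool filesystem (level 3 is a guess at a balanced setting).
/dev/sdb1  /var/lib/backuppc  btrfs  defaults,compress=zstd:3  0  0

# And in BackupPC's config.pl, disable BackupPC's own pool
# compression so files aren't compressed twice:
#   $Conf{CompressLevel} = 0;
```

My reasoning is that compressing once in the filesystem avoids BackupPC re-doing zlib work on data btrfs would otherwise recompress anyway, but I'd welcome correction if the pooling/dedup behavior changes with an uncompressed pool.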


Thanks in advance for constructive input, as I have seen some flame wars around the use of btrfs.

8-)
John
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
