Hey everybody,

1. Deduplication
"When given a list of files it will hash their contents on a block by block basis" - are those static blocks or is the length of a block defined by its content? (that would be more resilient regarding inserts of data and the shift of the following data caused by it)

Do I understand correctly that it is better to use bedup to deduplicate at the file level before using duperemove to deduplicate at the block level?
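For context, this is roughly what I mean by "file-level" dedup: hash whole files and group exact duplicates first. It is only my mental model, not a description of what bedup actually does internally:

import hashlib, os
from collections import defaultdict

def duplicate_file_groups(paths):
    """Group paths whose full contents hash identically."""
    groups = defaultdict(list)
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        groups[h.hexdigest()].append(path)
    return [g for g in groups.values() if len(g) > 1]

def all_files(root):
    """Walk a tree and yield every regular file path."""
    for dirpath, _, names in os.walk(root):
        for name in names:
            yield os.path.join(dirpath, name)

My thinking is that removing whole-file duplicates this way first leaves less work for the block-level pass; please correct me if that ordering doesn't actually help.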

2. Redundancy
"Background scrub process for finding and fixing errors on files with redundant copies " So fixing is only available with full redundancy?! No parity-based methods or other ECC-based approaches?!

Thanks in advance for any answers.