> Do you plan to support deduplication on a finer grained basis than file 
> level? As an example, in the end it could be interesting to deduplicate 1M 
> blocks of huge files. Backups of VM images come to my mind as a good 
> candidate. While my current backup script[1] takes care of this by using 
> "rsync --inplace" it won't consider files moved between two backup cycles. 
> This is the main purpose I'm using bedup for on my backup drive.
> 
> Maybe you could define another cutoff value to consider huge files for 
> block-level deduplication?

I'm considering deduplicating aligned blocks of large files that share
the same size (VM images cloned from the same baseline, say; those would
ideally come pre-CoWed, but rsync or scp could have broken the sharing).
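
Roughly, and just to illustrate the idea (this is not bedup code; sha1
and the 1MiB block size are arbitrary choices): hash the aligned blocks
of two same-size images, keep the offsets that match, and let the
in-kernel compare do the authoritative check before anything is shared:

    import hashlib
    import os

    BLOCK = 1024 * 1024  # 1MiB blocks, read at aligned offsets

    def block_digests(path, block=BLOCK):
        """Map aligned offset -> digest for every block of a file."""
        digests = {}
        with open(path, 'rb') as f:
            offset = 0
            while True:
                data = f.read(block)
                if not data:
                    break
                digests[offset] = hashlib.sha1(data).digest()
                offset += len(data)
        return digests

    def candidate_ranges(path_a, path_b, block=BLOCK):
        """Offsets where two same-size files have identical aligned blocks."""
        if os.path.getsize(path_a) != os.path.getsize(path_b):
            return []  # only files of the same size are candidates
        da, db = block_digests(path_a, block), block_digests(path_b, block)
        # Matching offsets would then be handed to the range dedup call,
        # which compares the actual data again before sharing extents.
        return [off for off in sorted(da) if db[off] == da[off]]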

It sounds simple, and was sort of prompted by the new syscall taking
short ranges, but it is tricky to figure out a sane heuristic (when to
hash, when to bail, when to submit without comparing, and what the
source should be in that last case), and it's not something I have an
immediate need for.  It is also possible to use 9p (with standard CoW
and/or small-file dedup) and trade a bit of configuration for much more
space-efficient VMs.
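
The "when to bail" part could be as cheap as probing a few small samples
of each block pair and only hashing (or submitting) blocks whose samples
agree.  Purely hypothetical; the probe offsets and sizes below are made
up:

    import os

    PROBES = (0, 4096, 65536, 1024 * 1024 - 4096)  # arbitrary sample spots
    SAMPLE = 64

    def worth_comparing(fd_a, fd_b, offset, length):
        """Cheap bail-out before hashing a block pair.

        False means the blocks definitely differ; True only means it is
        worth paying for a full hash or a submission (which re-compares
        the data anyway).
        """
        for probe in PROBES:
            if probe >= length:
                continue
            a = os.pread(fd_a, SAMPLE, offset + probe)
            b = os.pread(fd_b, SAMPLE, offset + probe)
            if a != b:
                return False
        return True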

Finer-grained tracking of which ranges have changed, and maybe some
caching of range hashes, would be a good first step before doing any
crazy large-file heuristics.  The hash caching would actually benefit
all use cases.
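
For the caching, something along these lines could let repeated scans
skip rehashing unchanged ranges.  The schema is hypothetical (keyed on
inode/offset/length, invalidated by mtime and size changes as a crude
stand-in for generation tracking) and does not reflect bedup's actual
tables:

    import hashlib
    import os
    import sqlite3

    db = sqlite3.connect('range-hashes.sqlite')
    db.execute("CREATE TABLE IF NOT EXISTS range_hashes "
               "(ino INTEGER, offset INTEGER, length INTEGER, "
               " mtime INTEGER, size INTEGER, digest TEXT, "
               " PRIMARY KEY (ino, offset, length))")

    def cached_range_hash(path, offset, length):
        """Hash a file range, reusing the cached digest when the file
        looks unchanged since the last scan."""
        st = os.stat(path)
        key = (st.st_ino, offset, length)
        row = db.execute("SELECT mtime, size, digest FROM range_hashes "
                         "WHERE ino=? AND offset=? AND length=?",
                         key).fetchone()
        if row and row[0] == st.st_mtime_ns and row[1] == st.st_size:
            return row[2]
        with open(path, 'rb') as f:
            f.seek(offset)
            digest = hashlib.sha1(f.read(length)).hexdigest()
        db.execute("INSERT OR REPLACE INTO range_hashes VALUES (?,?,?,?,?,?)",
                   key + (st.st_mtime_ns, st.st_size, digest))
        db.commit()
        return digest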

> Regards,
> Kai
> 
> [1]: https://gist.github.com/kakra/5520370

