> Defrag works on individual files, and tries to find a contiguous
> sequence of bytes to write the file's data to. In the process, the
> current implementation will break any CoW duplication -- either within
> single files (copies with cp --reflink=always) or files copied via
> snapshotting.
While it's clearly impossible to defragment and keep partially deduplicated chunks intact at the same time without some kind of compromise (the ideal form of which is just beyond the reach of this list participant's obviousness flashlight), how difficult would it be to modify the current defrag implementation so that, after each file with multiple reflinks/snapshots has been copied to contiguous space, the additional references to that file learn that a contiguous version is now available and update their links to point at it? Could Bernhard Duebi do it?
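For anyone who wants to reproduce the behaviour being discussed, here is a minimal userspace sketch using the FICLONE and BTRFS_IOC_DEFRAG ioctls. The file names ("original", "clone") are hypothetical; run it on a btrfs mount. The final step is only a crude userspace stand-in for the idea above: once a contiguous copy exists, re-cloning from it restores the sharing -- the question is whether defrag itself could do that for all existing references.

    /* reflink_defrag_demo.c -- reflink a file, defrag it (breaking the
     * sharing), then re-clone to re-share the contiguous extents. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h>      /* FICLONE */
    #include <linux/btrfs.h>   /* BTRFS_IOC_DEFRAG */

    int main(void)
    {
        int src = open("original", O_RDWR);
        int dst = open("clone", O_RDWR | O_CREAT, 0644);
        if (src < 0 || dst < 0) {
            perror("open");
            return 1;
        }

        /* cp --reflink=always: "clone" now shares "original"'s extents. */
        if (ioctl(dst, FICLONE, src) < 0) {
            perror("FICLONE");
            return 1;
        }

        /* Defragment "original": its data moves to contiguous extents,
         * while "clone" keeps pointing at the old ones -- the CoW
         * sharing is broken and disk usage roughly doubles. */
        if (ioctl(src, BTRFS_IOC_DEFRAG, NULL) < 0) {
            perror("BTRFS_IOC_DEFRAG");
            return 1;
        }

        /* Userspace approximation of "updating the other references":
         * re-clone from the now-contiguous file so both names again
         * share one (defragmented) set of extents. */
        if (ioctl(dst, FICLONE, src) < 0) {
            perror("FICLONE (re-share)");
            return 1;
        }

        close(src);
        close(dst);
        return 0;
    }

Note this only handles one known clone that userspace can name; snapshots (which may be read-only) and reflinks scattered across subvolumes are exactly the references only the kernel can enumerate, which is why the question is about changing the defrag implementation rather than scripting around it.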