Chris Mason wrote on 2016/03/24 16:35 -0400:
On Tue, Mar 22, 2016 at 09:35:50AM +0800, Qu Wenruo wrote:
From: Wang Xiaoguang <wangxg.f...@cn.fujitsu.com>

The basic idea is to also calculate the hash before compression, and to add the members needed for dedupe to record compressed file extents.

Since dedupe supports a dedupe_bs larger than 128K, which is the upper limit of a compressed file extent, in that case we skip dedupe and prefer compression: at that block size the dedupe hit rate is low and the gain from compression is more obvious.
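
For illustration only, a minimal sketch of that selection logic; the helper
name and constant below are assumptions for the sketch, not taken from the
patch itself:

#include <stdbool.h>
#include <stdint.h>

/* 128K: upper limit of a single compressed file extent */
#define MAX_COMPRESSED_EXTENT (128 * 1024)

/* Hypothetical helper: decide whether to try dedupe for a range. */
static bool range_should_dedupe(uint64_t dedupe_bs, bool will_compress)
{
        /*
         * With a dedupe block size above the compressed-extent limit,
         * the expected dedupe hit rate is low, so let the range go
         * through the compression path instead.
         */
        if (will_compress && dedupe_bs > MAX_COMPRESSED_EXTENT)
                return false;
        return true;
}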

The current implementation is far from elegant. A cleaner design would split every data processing method into its own independent function, and use one unified function to coordinate them.
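
As a rough sketch of that structure (all names here are hypothetical
stand-ins, not the real btrfs functions): each method gets its own handler,
and one coordinator picks exactly one of them per range:

#include <stdint.h>

/* Hypothetical stand-ins for the independent data processing methods. */
static int do_inline(uint64_t start, uint64_t end)   { return 0; }
static int do_dedupe(uint64_t start, uint64_t end)   { return 0; }
static int do_compress(uint64_t start, uint64_t end) { return 0; }
static int do_cow(uint64_t start, uint64_t end)      { return 0; }

/* One unified coordinator: pick exactly one method per dirty range. */
static int run_one_range(uint64_t start, uint64_t end,
                         int can_inline, int can_dedupe, int can_compress)
{
        if (can_inline)
                return do_inline(start, end);
        if (can_dedupe)
                return do_dedupe(start, end);
        if (can_compress)
                return do_compress(start, end);
        return do_cow(start, end);
}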

I'd leave this one out for now; it looks like we need to refine the
pipeline from dedup -> compression, and this is just more to carry around
until the initial support is in.  Can you just decline to dedup
compressed extents for now?

Yes, no problem at all.
Although this patch seems to work well, I have also planned to rework the current run_delalloc_range() to make it more flexible and clearer.

So the main objective of the patch is more about raising attention to such further rework.

And now it has achieved its goal.

Thanks,
Qu

-chris



