On Tue, Dec 18, 2012 at 2:31 AM, Chris Mason <chris.ma...@fusionio.com> wrote:
> On Mon, Dec 17, 2012 at 06:33:24AM -0700, Alexander Block wrote:
>> I did some research on deduplication in the past, and there are some
>> problems that you will face. I'll try to list some of them (certainly
>> not all).
>
> Thanks Alexander for writing all of this up.  There are a lot of great
> points here, but I'll summarize with:
>
> [ many challenges to online dedup ]
>
> [ offline dedup is the best way ]
>
> So, the big problem with offline dedup is that you're suddenly read-bound.  I
> don't disagree that offline makes a lot of the dedup problems easier,
> and Alexander describes a very interesting system here.
>
> I've tried to avoid features that rely on scanning though, just because
> idle disk time may not really exist.  But with scrub, we have the scan
> as a feature, and it may make a lot of sense to leverage that.
>
> Online dedup has a different set of tradeoffs, but as Alexander says,
> the hard part really is the data structure used to index the hashes.  I
> think there are a few different options here, including changing the
> file extent pointers to point to a SHA instead of a logical disk offset.
>

I am not sure I follow the approach of replacing the pointers; could
you please explain?
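
My current (possibly wrong) reading is something like the sketch below.
To be clear, the struct and field names are invented for illustration
and this is not the actual btrfs on-disk format. The idea, as I
understand it, is that the extent item would name content (a hash)
rather than a location, and a separate hash-to-extent index would
resolve that hash to a physical location at read time:

    #include <stdint.h>

    #define DEDUP_HASH_LEN 32       /* e.g. SHA-256 */

    /* Hypothetical extent record keyed by content instead of location. */
    struct dedup_file_extent {
            uint8_t  hash[DEDUP_HASH_LEN]; /* content hash replaces disk_bytenr */
            uint64_t offset;               /* offset into the shared extent */
            uint64_t num_bytes;            /* length of this reference */
    };

Is that roughly the idea?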

I agree that the data structure is the key part of designing this feature.

> So, part of my answer really depends on where you want to go with your
> thesis.  I expect the data structure work for efficient hash lookup is
> going to be closer to what your course work requires?

No, I don't think the data structure needs to be close to my course
work. What is your opinion on the most suitable data structure for
indexing the hashes?
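
As a strawman to make the question concrete: the simplest thing I can
picture is a flat hash-to-extent index, sorted by hash, with a
binary-search lookup, roughly as below. All names here are invented,
and it ignores the real problems (persistence, crash safety, memory
footprint, concurrent updates), so it is only meant to anchor the
discussion:

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    #define HASH_LEN 32     /* e.g. SHA-256 */

    struct hash_entry {
            uint8_t  hash[HASH_LEN]; /* content hash of the extent */
            uint64_t disk_bytenr;    /* where the extent actually lives */
            uint64_t num_bytes;      /* extent length */
    };

    /*
     * Binary search over an array of entries sorted by hash.  Returns
     * the matching entry, or NULL if no extent with this content is
     * known yet (i.e. the data is not a duplicate).
     */
    static struct hash_entry *dedup_lookup(struct hash_entry *idx, size_t n,
                                           const uint8_t hash[HASH_LEN])
    {
            size_t lo = 0, hi = n;

            while (lo < hi) {
                    size_t mid = lo + (hi - lo) / 2;
                    int cmp = memcmp(idx[mid].hash, hash, HASH_LEN);

                    if (cmp == 0)
                            return &idx[mid];
                    if (cmp < 0)
                            lo = mid + 1;
                    else
                            hi = mid;
            }
            return NULL;
    }

A dedicated btrfs tree keyed by the hash would be the obvious persistent
analogue, but I don't know whether that is the direction you had in mind.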

Thanks!
Martin