On 20 May 2014 06:07, Konstantinos Skarlatos <k.skarla...@gmail.com> wrote:
> On 19/5/2014 8:38 PM, Mark Fasheh wrote:
>>
>
>
> Well, after having good results with duperemove on a few gigs of data, I
> tried it on a 500 GB subvolume. After it scanned all the files, it has been
> stuck at 100% of one CPU core for about 5 hours and still hasn't done any
> deduping. My CPU is an Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, so I guess
> that's not the problem. It looks like duperemove slows down dramatically as
> the data volume grows.
>
>>
>> There's a TODO list which gives a decent idea of what's on my mind for
>> possible future improvements. I think what I'm most wanting to do right
>> now is some sort of (optional) writeout to a file of what was done during
>> a run. The idea is that you could feed that data back to duperemove to
>> improve the speed of subsequent runs. My priorities may change depending
>> on feedback from users of course.
>>
>> I also at some point want to rewrite some of the duplicate extent finding
>> code as it got messy and could be a bit faster.
>>         --Mark

I'm glad about this discussion.
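
The optional write-out Mark describes above sounds like exactly what runs
like that 500 GB one need. Just to check I understand the idea, here is a
rough sketch of the kind of record I imagine being saved between runs --
this is purely hypothetical on my part, not duperemove's actual code or
on-disk format:

/*
 * Sketch only: keep one record per scanned file so a later run can skip
 * re-hashing files whose size and mtime haven't changed.
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/stat.h>

struct scan_record {
	char          path[4096];   /* file the checksums belong to   */
	uint64_t      size;         /* size at the time of the scan   */
	int64_t       mtime;        /* mtime at the time of the scan  */
	unsigned char csum[32];     /* whatever hash the scanner uses */
};

/* Return 1 if the saved record still matches the file on disk,
 * i.e. the expensive re-hash can be skipped on this run. */
static int record_still_valid(const struct scan_record *rec)
{
	struct stat st;

	if (stat(rec->path, &st) != 0)
		return 0;
	return (uint64_t)st.st_size == rec->size &&
	       (int64_t)st.st_mtime == rec->mtime;
}

A second run would then only hash the files that fail that check and feed
everything else straight back into the duplicate-extent matching, which is
how I read the "feed that data back to duperemove" part.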

While I am nowhere near an expert on file systems, my knowledge has
increased a lot through BtrFS.

ZFS uses RAM to store its checksum tables, while Opendedup recommends a
separate HDD for them and works on 4k blocks. Both dedupe inline, always on.

I'm not against using a separate HDD to store csums; it's cheaper than RAM,
albeit slower.
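
Back-of-envelope, mostly to convince myself why RAM gets painful (the
per-entry cost below is my assumption, roughly the ballpark I have seen
quoted for ZFS's dedup table, not something I've measured): at 4k blocks,
1 TB of data is about 250 million blocks, and at ~300 bytes of table per
block that is around 75 GB of lookup structure per terabyte of unique data.
A disk-backed table sidesteps the RAM bill, at the price of an extra seek
or two per lookup.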

The part of duperemove I like is the ability to CHOOSE when and how I
want to dedupe.
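
For anyone curious what choosing "when" looks like under the hood, below is
a minimal sketch of the btrfs extent-same ioctl that, as I understand it,
duperemove drives once it has found matching ranges. It assumes
linux/btrfs.h from a 3.12 or newer kernel; the paths, offsets and length
are made up and there is no real error handling:

/* Ask btrfs to share one 128 KiB range between two files.  The kernel
 * compares the data itself and refuses if the ranges differ, so a
 * stray call cannot corrupt anything. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
	int src = open("/mnt/btrfs/a", O_RDONLY);
	int dst = open("/mnt/btrfs/b", O_RDWR);
	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	/* One source extent, one destination file to dedupe against it. */
	struct btrfs_ioctl_same_args *args =
		calloc(1, sizeof(*args) +
			  sizeof(struct btrfs_ioctl_same_extent_info));
	args->logical_offset = 0;          /* start of range in source */
	args->length = 128 * 1024;         /* bytes to dedupe           */
	args->dest_count = 1;
	args->info[0].fd = dst;
	args->info[0].logical_offset = 0;  /* start of range in dest    */

	if (ioctl(src, BTRFS_IOC_FILE_EXTENT_SAME, args) < 0)
		perror("BTRFS_IOC_FILE_EXTENT_SAME");
	else
		printf("status %d, %llu bytes deduped\n",
		       args->info[0].status,
		       (unsigned long long)args->info[0].bytes_deduped);

	free(args);
	return 0;
}

Having dedupe be an explicit call like that, run when I decide to run it,
is exactly the control I mean.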

Scott