On Sun, Jan 23, 2011 at 10:46 PM, Johan Corveleyn <jcor...@gmail.com> wrote:
> On Sat, Jan 8, 2011 at 6:50 AM, Stefan Fuhrmann
> <stefanfuhrm...@alice-dsl.de> wrote:
>> On 03.01.2011 02:14, Johan Corveleyn wrote:
>>> It would be interesting to see where the biggest gains
>>> are coming from (I'm guessing from the "per-machine-word"
>>> reading/comparing; I'd like to try that first, maybe together
>>> with allocating the file[] array on the stack. I'd like to
>>> avoid special-casing file_len==2 and splitting functions into
>>> *_slow and *_fast variants, because that makes the code a lot
>>> harder to follow imho, so I'd only do that if it's a
>>> significant contributor). But if you want to leave that as an
>>> exercise for the reader, that's fine as well :-).
>>
>> Exercise is certainly not a bad thing ;)
>>
>> But I think the stack variable is certainly helpful
>> and easy to do. While the "chunky" operation gives
>> a *big* boost, it is much more difficult to code if
>> you need to compare multiple sources. It should
>> be possible, though.
>
> Ok, I finished the exercise :-).
>
> - File info structures on stack instead of on heap: committed in
> r1060625. Gives 10% boost.
>
> - chunky operation for multiple sources: committed in r1062532. Gives
> ~250% boost on my machine.

Oops, as some people pointed out on IRC, that should be ~60% (I meant
2.5 times faster, going from ~50s to ~20s, i.e. about 60% less time
spent in datasources_open).
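
For anyone who wants to play with the idea: the heart of the "chunky"
operation is simply to advance through the identical prefix one machine
word at a time, falling back to byte-by-byte comparison only near a
mismatch or the end of the buffer, so you do one comparison and one
loop iteration per word instead of per byte. Below is a rough
two-source sketch of that technique (illustrative only -- it is not the
actual code from r1062532, which handles an arbitrary number of
datasources; common_prefix_len() is a made-up name for this example):

[[[
/* Illustrative sketch of the word-at-a-time ("chunky") prefix scan.
 * NOT the actual code from r1062532: the real implementation lives in
 * libsvn_diff and compares an arbitrary number of datasources. */
#include <stdio.h>
#include <string.h>

typedef unsigned long word_t;   /* one machine word */

/* Length of the common prefix of two buffers, comparing one machine
 * word at a time and finishing byte-by-byte near the first mismatch. */
static size_t
common_prefix_len(const char *a, const char *b, size_t len)
{
  size_t i = 0;

  /* Fast path: whole words, as long as they keep matching.
   * memcpy sidesteps alignment issues on strict-alignment CPUs. */
  while (i + sizeof(word_t) <= len)
    {
      word_t wa, wb;
      memcpy(&wa, a + i, sizeof(word_t));
      memcpy(&wb, b + i, sizeof(word_t));
      if (wa != wb)
        break;
      i += sizeof(word_t);
    }

  /* Slow path: locate the exact mismatch (or the end) byte by byte. */
  while (i < len && a[i] == b[i])
    i++;

  return i;
}

int main(void)
{
  const char *x = "The quick brown fox";
  const char *y = "The quick brown cat";
  size_t n = strlen(x) < strlen(y) ? strlen(x) : strlen(y);

  /* Prints "identical prefix: 16 bytes". */
  printf("identical prefix: %lu bytes\n",
         (unsigned long) common_prefix_len(x, y, n));
  return 0;
}
]]]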

-- 
Johan
