On Mon, Jan 27, 2014 at 9:33 AM, Matthieu Monrocq <
matthieu.monr...@gmail.com> wrote:

>
>
>
> On Mon, Jan 27, 2014 at 3:39 AM, Brian Anderson <bander...@mozilla.com> wrote:
>
>>
>> Consensus is that the `do` keyword is no longer pulling its weight.
>> Remove all uses of it, then remove support from the compiler. This is a 1.0
>> issue.
>>
>> # Experiment with faster hash maps (#11783)
>>
>> Rust's HashMap uses a cryptographically secure hash, and at least partly
>> as a result of that it is quite slow. HashMap continues to show up very,
>> very high in performance profiles of a variety of code. It's not clear what
>> the solution to this is, but it is clear that - at least sometimes - we
>> need a much faster hash map solution. Figure out how to create faster hash
>> maps in Rust, potentially sacrificing some amount of DOS-resistance by
>> using weaker hash functions. This is fairly open-ended and researchy, but a
>> solution to this could have a big impact on the performance of rustc and
>> other projects.
>>
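To make the weaker-hash-function idea above a bit more concrete: here's a rough sketch of plugging a faster, non-DoS-resistant hash (FNV-1a) into a HashMap via a pluggable hasher. Note this uses the std API as it exists today (post-1.0), not the HashMap API from the time of this thread, and the Fnv1a type below is purely illustrative, not anything in std:

use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Toy 64-bit FNV-1a hasher: fast, but not DoS-resistant.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf29ce484222325) // FNV-1a 64-bit offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= b as u64;
            self.0 = self.0.wrapping_mul(0x100000001b3); // FNV-1a 64-bit prime
        }
    }
}

// Swap out the hasher without touching the rest of the HashMap API.
type FastMap<K, V> = HashMap<K, V, BuildHasherDefault<Fnv1a>>;

fn main() {
    let mut counts: FastMap<String, u32> = FastMap::default();
    *counts.entry("rustc".to_string()).or_insert(0) += 1;
    assert_eq!(counts["rustc"], 1);
}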
>
> You might be interested in a series of articles by Joaquín M López Muñoz,
> who maintains the Boost.MultiIndex library. He did a fairly comprehensive
> overview of the hash map implementations of Dinkumware (MSVC), libstdc++,
> and libc++, in addition to Boost.MultiIndex, along with a lot of benchmarks
> showing insertion/removal/search performance in a variety of setups.
>
> One of the last articles:
> http://bannalia.blogspot.fr/2014/01/a-better-hash-table-clang.html
>

Let me also plug this blog post from a while back:
http://sebastiansylvan.com/2013/05/08/robin-hood-hashing-should-be-your-default-hash-table-implementation/.
There's also a follow-up on improving deletions*, which makes the final
form the fastest hash map I know of. It's also compact (95% load factor, 32
bits of overhead per element, which you can reduce to 2 bits per element if
you sacrifice some performance), and it doesn't allocate (other than doubling
the size of the table when you hit the load factor).
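For anyone who hasn't seen it, the core of the scheme is simple: on insertion you track how far you've probed, and whenever you hit an element that is closer to its ideal slot than you currently are, you swap with it and carry on inserting the displaced element. Lookups use the same invariant to terminate early. Here's a minimal sketch of that rule; the fixed u64 keys, the toy multiplicative hash, the EMPTY sentinel, and the lack of resizing are all simplifying assumptions on my part, not how the table in the post is actually laid out:

// Minimal Robin Hood open-addressing table (u64 keys -> u64 values).

const EMPTY: u64 = u64::MAX; // sentinel key meaning "slot unused" (assumption)

struct Slot { key: u64, val: u64 }

struct RobinHoodMap {
    slots: Vec<Slot>,
    mask: usize, // capacity - 1; capacity is a power of two
    len: usize,
}

impl RobinHoodMap {
    fn with_capacity_pow2(cap: usize) -> Self {
        assert!(cap.is_power_of_two());
        RobinHoodMap {
            slots: (0..cap).map(|_| Slot { key: EMPTY, val: 0 }).collect(),
            mask: cap - 1,
            len: 0,
        }
    }

    // Cheap multiplicative hash; a real table would use a stronger/pluggable one.
    fn hash(&self, key: u64) -> usize {
        (key.wrapping_mul(0x9E3779B97F4A7C15) >> 32) as usize & self.mask
    }

    // How far the key stored at `pos` is from its ideal slot.
    fn probe_distance(&self, pos: usize, key: u64) -> usize {
        pos.wrapping_sub(self.hash(key)) & self.mask
    }

    fn insert(&mut self, mut key: u64, mut val: u64) {
        assert!(key != EMPTY && self.len < self.slots.len());
        let mut pos = self.hash(key);
        let mut dist = 0;
        loop {
            if self.slots[pos].key == EMPTY {
                self.slots[pos] = Slot { key, val };
                self.len += 1;
                return;
            }
            if self.slots[pos].key == key {
                self.slots[pos].val = val; // overwrite existing key
                return;
            }
            // Robin Hood rule: steal the slot from a "richer" element
            // (one that is closer to its ideal position than we are).
            let existing_dist = self.probe_distance(pos, self.slots[pos].key);
            if existing_dist < dist {
                std::mem::swap(&mut key, &mut self.slots[pos].key);
                std::mem::swap(&mut val, &mut self.slots[pos].val);
                dist = existing_dist;
            }
            pos = (pos + 1) & self.mask;
            dist += 1;
        }
    }

    fn get(&self, key: u64) -> Option<u64> {
        let mut pos = self.hash(key);
        let mut dist = 0;
        loop {
            let slot = &self.slots[pos];
            if slot.key == EMPTY {
                return None;
            }
            if slot.key == key {
                return Some(slot.val);
            }
            // If the stored element is closer to home than our current probe
            // length, the key cannot be in the table (Robin Hood invariant).
            if self.probe_distance(pos, slot.key) < dist {
                return None;
            }
            pos = (pos + 1) & self.mask;
            dist += 1;
        }
    }
}

fn main() {
    let mut m = RobinHoodMap::with_capacity_pow2(16);
    for k in 0..10u64 {
        m.insert(k, k * k);
    }
    assert_eq!(m.get(7), Some(49));
    assert_eq!(m.get(42), None);
}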

For a benchmark with lots of std::strings it was 23%, 66%, and 25% faster
for insertions, deletions, and lookups respectively (compared to MSVC's
unordered_map), and it also used 30% less memory in that case.

Seb

* The basic form has an issue where repeated deletes gradually increase
the probe count. In pathological cases this can reduce performance by a
lot. The fix is to incrementally fix up the table on each delete (you could
also do it in batches every now and then). With that fix it's still faster in
all cases, and both the probe length and the probe-length variance remain low
even in the most pathological circumstances.
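One common way to do that incremental fixup is backward-shift deletion: when an entry is removed, slide the entries after it back one slot until you reach an empty slot or an entry already sitting in its ideal position. A rough sketch of that, extending the toy RobinHoodMap above (I'm not claiming this is exactly the scheme from the follow-up post):

impl RobinHoodMap {
    // Backward-shift deletion: find the key, remove it, then pull the rest of
    // its cluster back one slot so probe distances don't grow over time.
    fn remove(&mut self, key: u64) -> Option<u64> {
        let mut pos = self.hash(key);
        let mut dist = 0;
        loop {
            let slot_key = self.slots[pos].key;
            if slot_key == EMPTY || self.probe_distance(pos, slot_key) < dist {
                return None; // key not present
            }
            if slot_key == key {
                break;
            }
            pos = (pos + 1) & self.mask;
            dist += 1;
        }
        let removed = self.slots[pos].val;
        // Shift the cluster that follows back by one slot.
        loop {
            let next = (pos + 1) & self.mask;
            let next_key = self.slots[next].key;
            if next_key == EMPTY || self.probe_distance(next, next_key) == 0 {
                self.slots[pos].key = EMPTY;
                break;
            }
            let next_val = self.slots[next].val;
            self.slots[pos] = Slot { key: next_key, val: next_val };
            pos = next;
        }
        self.len -= 1;
        Some(removed)
    }
}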