On 16/06/16 21:29 +0200, François Dumont wrote:
Here is a new version incorporating all your feedback.

   PR libstdc++/71181
   * include/tr1/hashtable_policy.h
   (_Prime_rehash_policy::_M_next_bkt): Make past-the-end iterator
   dereferenceable to avoid check on lower_bound result.
   (_Prime_rehash_policy::_M_bkt_for_elements): Call the latter.
   (_Prime_rehash_policy::_M_need_rehash): Likewise.
   * src/c++11/hashtable_c++0x.cc (_Prime_rehash_policy::_M_next_bkt):
   Always return a value greater than input value. Set _M_next_resize to
   max value when reaching highest prime number.
   * src/shared/hashtable-aux.cc (__prime_list): Add comment about
   sentinel being now useless.
   * testsuite/23_containers/unordered_set/hash_policy/71181.cc: New.
   * testsuite/23_containers/unordered_set/hash_policy/power2_rehash.cc
   (test02): New.
   * testsuite/23_containers/unordered_set/hash_policy/prime_rehash.cc:
   New.
   * testsuite/23_containers/unordered_set/hash_policy/rehash.cc:
   Fix indentation.

Great - OK for trunk, thanks.


On 15/06/2016 10:29, Jonathan Wakely wrote:
On 14/06/16 22:34 +0200, François Dumont wrote:
  const unsigned long* __next_bkt =
-      std::lower_bound(__prime_list + 5, __prime_list + __n_primes, __n);
+      std::lower_bound(__prime_list + 6, __prime_list_end, __n);
+
+    if (*__next_bkt == __n && __next_bkt != __prime_list_end)
+      ++__next_bkt;
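
For reference, here is a minimal self-contained sketch of what that hunk
does: lower_bound over the prime table, then stepping past an exact match
so the result is strictly greater than the request. The short prime table
and the free-standing function here are made up for illustration; they are
not the actual libstdc++ table or member function.

#include <algorithm>
#include <cstddef>

// Made-up, shortened stand-in for __prime_list; the real table lives in
// src/shared/hashtable-aux.cc and is much longer.
static const unsigned long prime_list[] = { 2, 3, 5, 7, 11, 13, 17, 19, 23 };
static const unsigned long* const prime_list_end
  = prime_list + sizeof(prime_list) / sizeof(prime_list[0]);

// lower_bound gives the first prime >= n; stepping past an exact match
// makes the returned bucket count strictly greater than n.
unsigned long
next_bkt_with_check(std::size_t n)
{
  const unsigned long* next_bkt
    = std::lower_bound(prime_list, prime_list_end, n);
  if (next_bkt != prime_list_end && *next_bkt == n)
    ++next_bkt;
  if (next_bkt == prime_list_end)
    // n is at or beyond the largest prime; the real code pins the result
    // to the largest prime and stops further growth via _M_next_resize.
    return prime_list_end[-1];
  return *next_bkt;
}

With this sketch, next_bkt_with_check(10) yields 11 and
next_bkt_with_check(11) yields 13, matching the "always return a value
greater than input value" wording in the ChangeLog.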

Can we avoid this check by searching for __n + 1 instead of __n with
the lower_bound call?

Yes, that's another option; I will give it a try.

I did some comparisons and this version (searching for __n + 1) seems to
execute fewer instructions in some simple tests, according to cachegrind.

The only drawback is that calling _M_next_bkt(size_t(-1)) doesn't give the
right result, but a bucket count that large is unlikely in real use cases,
so it is acceptable.
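
To make that drawback concrete, here is a sketch of the "search for
__n + 1" variant, using the same made-up prime table and names as the
sketch above: when n is size_t(-1) the unsigned addition wraps to 0, so
lower_bound lands on the very first prime instead of something greater
than n.

#include <algorithm>
#include <cstddef>
#include <cstdio>

// Same made-up, shortened prime table as in the earlier sketch.
static const unsigned long prime_list[] = { 2, 3, 5, 7, 11, 13, 17, 19, 23 };
static const unsigned long* const prime_list_end
  = prime_list + sizeof(prime_list) / sizeof(prime_list[0]);

// "Search for n + 1" variant: no post-check is needed, but n + 1 wraps
// around to 0 when n == size_t(-1).
unsigned long
next_bkt_plus_one(std::size_t n)
{
  const unsigned long* next_bkt
    = std::lower_bound(prime_list, prime_list_end, n + 1);
  return next_bkt != prime_list_end ? *next_bkt : prime_list_end[-1];
}

int main()
{
  std::printf("%lu\n", next_bkt_plus_one(10));              // 11, as expected
  std::printf("%lu\n", next_bkt_plus_one(std::size_t(-1))); // 2: wrap-around
}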

Yes, good point. I don't think we can ever have that many buckets,
because each bucket requires more than one byte, so we can't fit that
many buckets in memory anyway.
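
A back-of-the-envelope version of that argument, with made-up names, and
assuming each bucket holds at least one pointer (certainly more than one
byte):

#include <cstddef>
#include <cstdio>

int main()
{
  // The bucket count is bounded by the address space divided by the
  // per-bucket size, which is far below size_t(-1).
  const std::size_t addressable = static_cast<std::size_t>(-1);
  const std::size_t per_bucket  = sizeof(void*); // 8 on typical 64-bit targets
  std::printf("bucket count upper bound: %zu\n", addressable / per_bucket);
  std::printf("size_t(-1):               %zu\n", addressable);
}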
