Emilio G. Cota <c...@braap.org> writes:

> xxhash is a fast, high-quality hashing function. The appended
> brings in the 32-bit version of it, with the small modification that
> it assumes the data to be hashed is made of 32-bit chunks; this increases
> speed slightly for the use-case we care about, i.e. tb-hash.
>
> The original algorithm, as well as a 64-bit implementation, can be found at:
>   https://github.com/Cyan4973/xxHash
>
> Signed-off-by: Emilio G. Cota <c...@braap.org>
> ---
>  include/qemu/xxhash.h | 106 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 106 insertions(+)
>  create mode 100644 include/qemu/xxhash.h
>
> diff --git a/include/qemu/xxhash.h b/include/qemu/xxhash.h
> new file mode 100644
> index 0000000..a13a665
> --- /dev/null
> +++ b/include/qemu/xxhash.h
> @@ -0,0 +1,106 @@
<snip>
> +
> +/* u32 hash of @n contiguous chunks of u32's */
> +static inline uint32_t qemu_xxh32(const uint32_t *p, size_t n, uint32_t seed)
> +{
What is the point of seed here? I looked on the original site to see if
there was any guidance on tuning seed but couldn't find anything. I
appreciate the compiler can inline the constant away, but perhaps we
should #define it and drop the parameter if we are not intending to
modify it?

Also it might be helpful to wrap the call to avoid getting the
boilerplate sizing wrong:

  #define qemu_xxh32(s) \
      qemu_xxh32_impl((const uint32_t *)s, sizeof(*s) / sizeof(uint32_t), 1)

Then calls become a little simpler for the user:

  return qemu_xxh32(&k);

Do we need to include a compile-time check for structures that don't
neatly divide into uint32_t chunks?

> +    const uint32_t *end = p + n;
> +    uint32_t h32;
> +
> +    if (n >= 4) {
> +        const uint32_t * const limit = end - 4;
> +        uint32_t v1 = seed + PRIME32_1 + PRIME32_2;
> +        uint32_t v2 = seed + PRIME32_2;
> +        uint32_t v3 = seed + 0;
> +        uint32_t v4 = seed - PRIME32_1;
> +
> +        do {
> +            v1 += *p * PRIME32_2;
> +            v1 = XXH_rotl32(v1, 13);
> +            v1 *= PRIME32_1;
> +            p++;
> +            v2 += *p * PRIME32_2;
> +            v2 = XXH_rotl32(v2, 13);
> +            v2 *= PRIME32_1;
> +            p++;
> +            v3 += *p * PRIME32_2;
> +            v3 = XXH_rotl32(v3, 13);
> +            v3 *= PRIME32_1;
> +            p++;
> +            v4 += *p * PRIME32_2;
> +            v4 = XXH_rotl32(v4, 13);
> +            v4 *= PRIME32_1;
> +            p++;
> +        } while (p <= limit);
> +        h32 = XXH_rotl32(v1, 1) + XXH_rotl32(v2, 7) + XXH_rotl32(v3, 12) +
> +              XXH_rotl32(v4, 18);
> +    } else {
> +        h32 = seed + PRIME32_5;
> +    }

I don't claim any particular knowledge of hashing code, but I note the
test cases we add only exercise the n == 1 path, while in actual usage we
exercise n = 5 (at least for arm32; I guess aarch64 would be more).

> +
> +    h32 += n * sizeof(uint32_t);
> +
> +    while (p < end) {
> +        h32 += *p * PRIME32_3;
> +        h32 = XXH_rotl32(h32, 17) * PRIME32_4;
> +        p++;
> +    }
> +
> +    h32 ^= h32 >> 15;
> +    h32 *= PRIME32_2;
> +    h32 ^= h32 >> 13;
> +    h32 *= PRIME32_3;
> +    h32 ^= h32 >> 16;
> +
> +    return h32;
> +}
> +
> +#endif /* QEMU_XXHASH_H */

--
Alex Bennée