Andres Freund <and...@anarazel.de> writes:
> On 2018-08-28 14:45:49 +1200, Thomas Munro wrote:
>> Yeah, it would be a terrible idea as a general hash function for use
>> in contexts where the "avalanche effect" assumption is made about
>> information being spread out over the bits (the HJ batching code
>> wouldn't work for example).  I was wondering specifically about the
>> limited case of hash tables that are used to look things up in caches.
> I don't understand why that'd be ok there?  With a simple 1:1 hash
> function, which you seem to advocate, many hash tables would be much
> fuller in the 1000-3000 range (where pg_class, etc. all reside) than
> in any other range.  A lot of our tables are populated on-demand, so
> you'd often end up with most of the data in one or two buckets, and a
> larger number largely empty.

Hmm ... but what we actually care about, for dynahash tables, is just
the low-order bits of the hash; the original range of the values isn't
interesting.  Some desultory experimentation with queries like

select oid::int % 1024, count(*) from pg_proc group by 1 order by 2 desc;

select (hashoid(oid)+2^32)::int8 % 1024, count(*) from pg_proc
group by 1 order by 2 desc;

offers some support for Thomas' idea: the hashoid results are certainly
not better distributed than the raw OIDs, and in some cases noticeably
worse.

I'd supposed that we needed to get the OIDs' high-order bits to
participate in the hash in order to get decent results, but looking at
these numbers makes me wonder.

OTOH, I'm also not convinced it's worth messing with.  In principle we
could shave some cycles with a no-op hash function, but I've not seen
anything to make me think that hashoid() is a measurable bottleneck.

			regards, tom lane