On Sat, Jun 22, 2013 at 9:48 AM, Stephen Frost <sfr...@snowman.net> wrote:
>> The correct calculation that would match the objective set out in the
>> comment would be
>>
>>  dbuckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;
>
> This looks to be driving the size of the hash table off of "how many
> tuples of this size can I fit into memory?" and ignoring how many
> actual rows we actually have to hash.  Consider a work_mem of 1GB with
> a small number of rows to hash, say 50.  With a tupsize of 8 bytes,
> we'd be creating a hash table sized for some 13M buckets.
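
To spell out the arithmetic being described there, here's a rough
sketch (illustration only, not the actual nodeHash.c code; it assumes
NTUP_PER_BUCKET = 10 and ignores per-tuple overhead):

    /* Illustration only, not the code in nodeHash.c. */
    #include <stdio.h>
    #define NTUP_PER_BUCKET 10

    int main(void)
    {
        long hash_table_bytes = 1024L * 1024L * 1024L; /* work_mem = 1GB */
        long tupsize = 8;                              /* 8-byte tuples  */
        long dbuckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;

        printf("%ld\n", dbuckets);  /* 13421772, i.e. ~13M buckets,
                                       even if only 50 rows show up */
        return 0;
    }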

This is a fair point, but I still think Simon's suggestion has merit
too.  Letting the number of buckets ramp up when there's ample memory
seems like a broadly sensible strategy.  We might need to put a floor
on the effective load factor, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

