Robert Haas <robertmh...@gmail.com> writes:
> One could test it with each pgbench thread starting at a random point
> in the same sequence and wrapping at the end.
Well, the real point is that 10000 distinct statements all occurring with
exactly the same frequency isn't a realistic scenario: any hashtable size
less than 10000 necessarily sucks, and any size >= 10000 is perfect. I'd
think that what you want to test is a long-tailed frequency distribution
(probably a 1/N type of law) where a small number of statements account
for most of the hits and there are progressively fewer uses of less
common statements.

What would then be interesting is how the performance changes as the
hashtable size is varied to cover more or less of that distribution.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
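[Editor's note: the suggested test could be sketched roughly as below. This is a
simplified simulation, not anything from the thread: statement ids are drawn
with probability proportional to 1/rank (a 1/N-type law), and an LRU cache
stands in for the fixed-size statement hashtable. The function names and the
parameters (10000 statements, 100000 events) are illustrative assumptions.]

```python
# Sketch: how does hit rate vary with hashtable size under a 1/N
# frequency distribution of statements? (Hypothetical simulation;
# an LRU cache is used as a stand-in for the entry-eviction policy.)
import random
from collections import OrderedDict

def zipf_stream(n_statements, n_events, seed=0):
    """Draw statement ids with probability proportional to 1/rank."""
    rng = random.Random(seed)
    weights = [1.0 / k for k in range(1, n_statements + 1)]
    return rng.choices(range(n_statements), weights=weights, k=n_events)

def lru_hit_rate(stream, cache_size):
    """Fraction of events that hit an LRU cache of the given size."""
    cache = OrderedDict()
    hits = 0
    for stmt in stream:
        if stmt in cache:
            hits += 1
            cache.move_to_end(stmt)  # mark as most recently used
        else:
            cache[stmt] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(stream)

stream = zipf_stream(10000, 100000)
for size in (100, 1000, 10000):
    print(size, round(lru_hit_rate(stream, size), 3))
```

With a uniform distribution the hit rate would jump from poor to perfect at
size 10000; under the long-tailed distribution it instead climbs gradually,
since a small cache already covers the few statements carrying most of the
hits.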