"David Rowley" <dgrow...@gmail.com> writes: > Currently I'm unsure the best way to ensure that the hash join goes into > more than one batch apart from just making the dataset very large.
Make work_mem very small?

But really there are two different performance regimes here, one where
the hash data is large enough to spill to disk and one where it isn't.
Reducing work_mem will cause data to spill into kernel disk cache, but
if the total problem fits in RAM then very possibly that data won't
ever really go to disk.  So I suspect such a test case will act more
like the small-data case than the big-data case.  You probably actually
need more data than RAM to be sure you're testing the big-data case.

Regardless, I'd like to see some performance results from both regimes.
It's also important to be sure there is not a penalty for single-batch
cases.

			regards, tom lane
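
(A minimal sketch of the work_mem knob in question, assuming
hypothetical tables t1 and t2; whether the spilled batches actually
reach disk still depends on the kernel cache:)

    -- Force the hash join into multiple batches by shrinking work_mem.
    SET work_mem = '64kB';    -- 64kB is the minimum allowed setting

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM t1
    JOIN t2 ON t2.t1_id = t1.id;
    -- The Hash node reports "Buckets: ... Batches: ... Memory Usage: ...";
    -- Batches > 1 means the inner side was written out to batch files.

    RESET work_mem;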