I've been putting a little bit of thought into how to go about testing the
performance of this patch. From reading the previous threads, quite a bit
of testing was done with a certain data set, and everyone who tested it
found the patch to be a big winner, with staggering performance gains on
the skewed dataset.

The idea I came up with for benchmarking was a little similar to what I
remember from the original tests. I have a sales orders table and a
products table. My version of the sales orders table contains a customer
column. Data for 10 customers is populated into the sales orders table,
customer [...]
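
To make that concrete, something like the following is the shape I have in
mind (table and column names, row counts, and the exact skew here are
placeholders rather than my actual test script):

CREATE TABLE products (
  productid int PRIMARY KEY,
  name      text
);

CREATE TABLE salesorders (
  orderid    serial PRIMARY KEY,
  customerid int,
  productid  int REFERENCES products
);

INSERT INTO products
SELECT g, 'Product ' || g
FROM generate_series(1, 10000) g;

-- 10 customers: customer 1 takes ~90% of the rows and customers 2-10
-- split the rest; likewise a few popular products dominate the join
-- key, so the sales orders table is skewed on both columns
INSERT INTO salesorders (customerid, productid)
SELECT CASE WHEN g % 10 = 0 THEN (g % 9) + 2 ELSE 1 END,
       CASE WHEN g % 10 = 0 THEN (g % 10000) + 1 ELSE (g % 10) + 1 END
FROM generate_series(1, 1000000) g;

The point is just that a handful of join-key values account for most of
the rows in the big table; the row counts can be scaled up or down as
needed.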
"David Rowley" writes:
> Currently I'm unsure the best way to ensure that the hash join goes into
> more than one batch apart from just making the dataset very large.
Make work_mem very small?
But really there are two different performance regimes here, one where
the hash data is large enough to spill to disk and one where it isn't.
Reducing work_mem [...]
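
E.g. something like this (a sketch reusing the toy tables above; 64kB is
the smallest setting work_mem will accept) makes it easy to check whether
the join has gone multi-batch:

SET work_mem = '64kB';

EXPLAIN ANALYZE
SELECT count(*)
FROM salesorders s
JOIN products p ON p.productid = s.productid;

On recent versions the Hash node in the output reports a line like
"Buckets: 1024  Batches: 8  Memory Usage: 28kB"; any Batches figure above
1 means the build side didn't fit in work_mem and spilled to disk.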