On 03/11/11 09:12, Brian Fehrle wrote:
> And here is a query plan.
> Hash Join  (cost=17516.470..26386.660 rows=27624 width=4) (actual time=309.194..395.135 rows=12384 loops=1)
>   Hash Cond: (yankee.alpha = hotel_zulu.quebec)
>   ->  Bitmap Heap Scan on yankee  (cost=1066.470..8605.770 rows=2762

Thanks Tom,
And it looks like I pasted an older explain plan, which is almost exactly
the same as the one with 50MB work_mem, except for the hash join
'buckets' part, which used more memory and only one 'bucket', so to speak.
When running with 50MB work_mem rather than 1MB, the query went
from an average of about 190 ms down to about 170 ms.
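
For anyone reproducing this kind of A/B comparison, a minimal psql sketch
follows; the query is a stand-in assembled from the anonymized plan above
(the real select list isn't shown in the thread):

    -- confirm the value this session will actually use
    SHOW work_mem;

    -- run the same join under both settings
    SET work_mem = '1MB';
    EXPLAIN ANALYZE
      SELECT y.alpha FROM yankee y JOIN hotel_zulu h ON y.alpha = h.quebec;

    SET work_mem = '50MB';
    EXPLAIN ANALYZE
      SELECT y.alpha FROM yankee y JOIN hotel_zulu h ON y.alpha = h.quebec;

    -- in each plan, compare the Hash node's
    -- "Buckets: ... Batches: ... Memory Usage: ..." line;
    -- Batches > 1 means the hash table spilled past work_mem
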
Brian Fehrle writes:
> I've got a query that I need to squeeze as much speed out of as I can.
Hmm ... are you really sure this is being run with work_mem = 50MB?
The hash join is getting "batched", which means the executor thinks it's
working under a memory constraint significantly less than the 50MB you
mention.
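
One thing worth ruling out here (a sketch, assuming stock catalogs on
PostgreSQL 9.0 or later): a per-role or per-database ALTER ... SET can
silently override the postgresql.conf value, so the session may not be
getting 50MB at all.

    -- the value actually in effect for this session
    SHOW work_mem;

    -- any per-database or per-role overrides
    SELECT setdatabase, setrole, setconfig
      FROM pg_db_role_setting;
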
Hi all,
I've got a query that I need to squeeze as much speed out of as I can.
When I execute this query, the average time it takes is about 190 ms.
I increased my work_mem from 1MB to 50MB, which brought the average
down to about 170 ms, but that's still not fast enough. This query