On 12 June 2015 at 02:40, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:

> 2015-06-11 23:28 GMT+09:00 Robert Haas <robertmh...@gmail.com>:
> > On Wed, Jun 10, 2015 at 10:57 PM, Kouhei Kaigai <kai...@ak.jp.nec.com>
> wrote:
> >> The attached patch replaces this palloc0() by MemoryContextAllocHuge()
> >> + memset(). Indeed, this hash table is built for a relation with
> >> nrows=119994544, so it is not surprising if the hash slot array itself
> >> is larger than 1GB.
> >
> > You forgot to attach the patch, I think.
> >
> Oops, I forgot to attach indeed.
>
> >  It looks to me like the size
> > of a HashJoinTuple is going to be 16 bytes, so 1GB/16 = ~64 million.
> > That's a lot of buckets, but maybe not unreasonably many if you've got
> > enough memory.
> >
> EXPLAIN shows this Hash node takes an underlying SeqScan with
> 119994544 (~119 million) rows, but the hash table is still much
> smaller than my work_mem setting.
>
>
I've just run into this problem while running a 100GB TPC-H benchmark
on a machine with 64GB of RAM.
When attempting to run Q21 with a work_mem of 10GB, I get:
 ERROR:  invalid memory alloc request size 1073741824

Setting work_mem to 1GB or less gets the query running.

I've patched the code with your patch, Kohei, and it's now working.

Thought I'd better post this just in case this gets forgotten about.

Thanks

David

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
