On 19 August 2015 at 12:55, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:

> 2015-08-19 20:12 GMT+09:00 Simon Riggs <si...@2ndquadrant.com>:
> > On 12 June 2015 at 00:29, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
> >
> >>
> >> I see two ways to fix this:
> >>
> >> (1) enforce the 1GB limit (probably better for back-patching, if that's
> >>     necessary)
> >>
> >> (2) make it work with hash tables over 1GB
> >>
> >> I'm in favor of (2) if there's a good way to do that. It seems a bit
> >> stupid not to be able to use a fast hash table because there's some
> >> artificial limit. Are there any fundamental reasons not to use the
> >> MemoryContextAllocHuge fix, proposed by KaiGai-san?
> >
> >
> > If there are no objections, I will apply the patch for 2) to HEAD and
> > backpatch to 9.5.
> >
> Please don't rush. :-)
>

Please explain what rush you see.


> It is not difficult to replace palloc() with palloc_huge(); however, it
> may lead to another problem once the planner gives us a crazy estimate.
> Below is my comment from the other thread.
>

Yes, I can read both threads and will apply patches for each problem.
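
For anyone following along, here is a rough sketch of the change under
discussion. It is not the actual patch: the call site is modeled on the
bucket-array allocation in nodeHash.c, and the guard at the end is purely
illustrative. MemoryContextAllocHuge() and work_mem are real PostgreSQL
APIs (memutils.h, guc.c); treat the exact hunks as assumptions.

/*
 * Sketch only, not the committed patch.  palloc() rejects requests
 * larger than MaxAllocSize (1GB - 1), so the bucket array of a very
 * large hash table fails with "invalid memory alloc request size".
 */

/* before: capped at 1GB */
hashtable->buckets = (HashJoinTuple *)
    palloc(nbuckets * sizeof(HashJoinTuple));

/* after: allowed up to MaxAllocHugeSize */
hashtable->buckets = (HashJoinTuple *)
    MemoryContextAllocHuge(hashtable->batchCxt,
                           nbuckets * sizeof(HashJoinTuple));

/*
 * KaiGai's point above: once the 1GB cap is lifted, a wildly wrong row
 * estimate lets the planner request an absurd bucket count, so the
 * estimate still needs a sanity bound.  Illustrative only; the real
 * bounds (with more clamps) live in ExecChooseHashTableSize():
 */
{
    Size max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple);

    if ((Size) nbuckets > max_pointers)
        nbuckets = (int) max_pointers;
}

The second hunk is why the two threads belong together: lifting the
allocation limit without sanity-checking the estimate just trades one
failure mode for another.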

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
