On 19 August 2015 at 12:23, Kouhei Kaigai <kai...@ak.jp.nec.com> wrote:

> > -----Original Message-----
> > From: David Rowley [mailto:david.row...@2ndquadrant.com]
> > Sent: Wednesday, August 19, 2015 9:00 AM
> > The size of your hash table is 101017630802 bytes, which is:
> >
> > david=# select pg_size_pretty(101017630802);
> >
> >  pg_size_pretty
> > ----------------
> >  94 GB
> > (1 row)
> >
> > david=# set work_mem = '94GB';
> > ERROR:  98566144 is outside the valid range for parameter "work_mem"
> > (64 .. 2097151)
> >
> Hmm. Why could I set work_mem = '96GB' without error?
>
> It was set in postgresql.conf:
>
>   postgres=# SHOW work_mem;
>    work_mem
>   ----------
>    96GB
>   (1 row)
>
> > So I think the only way the following could cause an error is if
> > bucket_size was 1, which it can't be.
> >
> > lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
> >
> >
> > I think one day soon we'll need to allow larger work_mem sizes, but I
> > think there's lots more to do than this change.
> >
> I overlooked this limitation, but why can I bypass the GUC limit check?
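
For anyone who wants to poke at those numbers, here's a minimal
standalone sketch of that calculation. my_log2() here is written from
memory of the dynahash.c version, and bucket_size is just an assumed
illustrative value, not what ExecChooseHashTableSize would really
compute:

#include <stdio.h>

/* smallest i such that 2^i >= num (from memory of dynahash.c) */
static int
my_log2(long num)
{
    int     i;
    long    limit;

    for (i = 0, limit = 1; limit < num; i++, limit <<= 1)
        ;
    return i;
}

int
main(void)
{
    /* assumes an LP64 platform, i.e. 64-bit "long" */
    long    hash_table_bytes = 101017630802L;   /* the 94GB from above */
    long    bucket_size = 48;                   /* assumed, for illustration */
    int     lg = my_log2(hash_table_bytes / bucket_size);

    /* shift 1L rather than 1, since the result can exceed INT_MAX */
    long    lbuckets = 1L << lg;

    printf("my_log2 = %d, lbuckets = %ld\n", lg, lbuckets);
    return 0;
}

On my machine that prints my_log2 = 31 and lbuckets = 2147483648.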


I'm unable to get the server to start if I set work_mem that big. I also
tried starting the server with 1GB of work_mem and then doing a pg_ctl
reload. In each case I get the same error message that I would have gotten
had I done: set work_mem = '96GB';

Which version are you running?

Are you sure there are no changes in guc.c affecting work_mem's range?
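
One guess as to why we're seeing different behaviour: the upper limit
for the memory GUCs is platform dependent. From memory, guc.h has
something like the following (please check it against your tree):

/* upper limit for GUC variables measured in kilobytes of memory */
#if SIZEOF_SIZE_T > 4 && SIZEOF_LONG > 4
#define MAX_KILOBYTES   INT_MAX
#else
#define MAX_KILOBYTES   (INT_MAX / 1024)
#endif

On a platform with a 32-bit "long" that works out to the 2097151 kB cap
in my error message, while on an LP64 box the cap is INT_MAX kB, which
would happily accept '96GB'.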

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
