> -----Original Message-----
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Sent: Wednesday, August 19, 2015 9:00 AM
> To: Kevin Grittner
> Cc: Kaigai Kouhei(海外 浩平); pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Bug? ExecChooseHashTableSize() got assertion failed with crazy number of rows
> 
> On 19 August 2015 at 08:54, Kevin Grittner <kgri...@ymail.com> wrote:
> 
> 
>       Kouhei Kaigai <kai...@ak.jp.nec.com> wrote:
> 
>       >         long        lbuckets;
> 
>       >         lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
> 
>       >     Assert(nbuckets > 0);
> 
>       > In my case, the hash_table_bytes was 101017630802, and bucket_size was 48.
>       > So, my_log2(hash_table_bytes / bucket_size) = 31, then lbuckets will have
>       > negative number because both "1" and my_log2() is int32.
>       > So, Min(lbuckets, max_pointers) chooses 0x80000000, then it was set on
>       > the nbuckets and triggers the Assert().
> 
>       > Attached patch fixes the problem.
> 
>       So you changed the literal of 1 to 1U, but doesn't that just double
>       the threshold for failure?  Wouldn't 1L (to match the definition of
>       lbuckets) be better?
> 
> 
> 
> 
> I agree, but I can only imagine this is happening because the maximum setting
> of work_mem has been modified with the code you're running.
> 
> hash_table_bytes is set based on work_mem
> 
> hash_table_bytes = work_mem * 1024L;
> 
> The size of your hash table is 101017630802 bytes, which is:
> 
> david=# select pg_size_pretty(101017630802);
> 
>  pg_size_pretty
> ----------------
>  94 GB
> (1 row)
> 
> david=# set work_mem = '94GB';
> ERROR:  98566144 is outside the valid range for parameter "work_mem" (64 .. 2097151)
>
Hmm. Then why could I set work_mem = '96GB' without any error?

The value is set in postgresql.conf.

  postgres=# SHOW work_mem;
   work_mem
  ----------
   96GB
  (1 row)
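
With work_mem this large, hash_table_bytes (= work_mem * 1024L) is on the
order of 100GB, so my_log2(hash_table_bytes / bucket_size) really does return
31 and the int-typed shift wraps to a negative value, as described above.
Here is a minimal standalone sketch of that failure; my_log2() is paraphrased
from dynahash.c, the constants are the ones from my report, and it assumes a
64-bit long and two's-complement wraparound:

  #include <stdio.h>

  /* simplified my_log2(): smallest p such that 2^p >= num */
  static int
  my_log2(long num)
  {
      int     i;
      long    limit;

      for (i = 0, limit = 1; limit < num; i++, limit <<= 1)
          ;
      return i;
  }

  int
  main(void)
  {
      long    hash_table_bytes = 101017630802L;   /* value from my report */
      long    bucket_size = 48;
      long    lbuckets;

      /* the literal "1" is int, so the shift is evaluated in 32 bits
       * and wraps negative once my_log2() returns 31 */
      lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
      printf("int shift:  %ld\n", lbuckets);      /* -2147483648 */

      /* widening the literal (Kevin's 1L suggestion) keeps the shift
       * in long arithmetic and gives the intended bucket count */
      lbuckets = 1L << my_log2(hash_table_bytes / bucket_size);
      printf("long shift: %ld\n", lbuckets);      /*  2147483648 */

      return 0;
  }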

> So I think the only way the following could cause an error, is if bucket_size
> was 1, which it can't be.
> 
> lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
> 
> 
> I think one day soon we'll need to allow larger work_mem sizes, but I think
> there's lots more to do than this change.
>
I overlooked this limitation, but why can I bypass the GUC range check?
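
For what it's worth, converted to kB (work_mem is a kB-unit GUC) both
settings are far outside the 64 .. 2097151 range shown in your error message,
so the upper bound enforced by my build is evidently a larger one. A quick
check of the arithmetic, with the 2097151 figure simply copied from your
message:

  #include <stdio.h>

  int
  main(void)
  {
      long    range_max = 2097151L;            /* upper bound from your error message */
      long    kb_94gb   = 94L * 1024 * 1024;   /* 98566144 kB, the value you tried    */
      long    kb_96gb   = 96L * 1024 * 1024;   /* 100663296 kB, the value I have set  */

      printf("94GB = %ld kB, 96GB = %ld kB, max = %ld kB\n",
             kb_94gb, kb_96gb, range_max);
      return 0;
  }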

--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kai...@ak.jp.nec.com>
 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
