Re: [HACKERS] palloc() too large on pg_buffercache with large shared_buffers

2016-09-15 Thread Robert Haas
On Wed, Sep 14, 2016 at 7:59 PM, Kouhei Kaigai  wrote:
>> On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai  wrote:
>> > It looks to me like pg_buffercache tries to allocate more than 1GB using
>> > palloc(), when shared_buffers is more than 256GB.
>> >
>> > # show shared_buffers ;
>> >  shared_buffers
>> > 
>> >  280GB
>> > (1 row)
>> >
>> > # SELECT buffers, d.datname, coalesce(c.relname, '???')
>> > FROM (SELECT count(*) buffers, reldatabase, relfilenode
>> > FROM pg_buffercache group by reldatabase, relfilenode) b
>> >LEFT JOIN pg_database d ON d.oid = b.reldatabase
>> >LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
>> >  WHERE datname = current_database())
>> >AND b.relfilenode = pg_relation_filenode(c.oid)
>> >ORDER BY buffers desc;
>> > ERROR:  invalid memory alloc request size 1174405120
>> >
>> > This is a situation where MemoryContextAllocHuge() should be used
>> > instead of palloc().  Also, it may need back-patching?
>>
>> I guess so.  Although it's not very desirable for it to use that much
>> memory, I suppose if you have a terabyte of shared_buffers you
>> probably have 4GB of memory on top of that to show what they contain.
>>
> Exactly. I found this problem when someone asked me why shared_buffers=280GB
> is slower than shared_buffers=128MB when scanning a 350GB table.
> As I expected, most of the shared buffers were not in use, and they also
> reduced the amount of free memory usable for the page cache.

OK.  Committed and back-patched to 9.4.  There's no support for huge
allocations before that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] palloc() too large on pg_buffercache with large shared_buffers

2016-09-14 Thread Kouhei Kaigai
> On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai  wrote:
> > It looks to me like pg_buffercache tries to allocate more than 1GB using
> > palloc(), when shared_buffers is more than 256GB.
> >
> > # show shared_buffers ;
> >  shared_buffers
> > 
> >  280GB
> > (1 row)
> >
> > # SELECT buffers, d.datname, coalesce(c.relname, '???')
> > FROM (SELECT count(*) buffers, reldatabase, relfilenode
> > FROM pg_buffercache group by reldatabase, relfilenode) b
> >LEFT JOIN pg_database d ON d.oid = b.reldatabase
> >LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
> >  WHERE datname = current_database())
> >AND b.relfilenode = pg_relation_filenode(c.oid)
> >ORDER BY buffers desc;
> > ERROR:  invalid memory alloc request size 1174405120
> >
> > This is a situation where MemoryContextAllocHuge() should be used
> > instead of palloc().  Also, it may need back-patching?
> 
> I guess so.  Although it's not very desirable for it to use that much
> memory, I suppose if you have a terabyte of shared_buffers you
> probably have 4GB of memory on top of that to show what they contain.
>
Exactly. I found this problem when someone asked me why shared_buffers=280GB
is slower than shared_buffers=128MB when scanning a 350GB table.
As I expected, most of the shared buffers were not in use, and they also
reduced the amount of free memory usable for the page cache.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 



Re: [HACKERS] palloc() too large on pg_buffercache with large shared_buffers

2016-09-14 Thread Robert Haas
On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai  wrote:
> It looks to me like pg_buffercache tries to allocate more than 1GB using
> palloc(), when shared_buffers is more than 256GB.
>
> # show shared_buffers ;
>  shared_buffers
> 
>  280GB
> (1 row)
>
> # SELECT buffers, d.datname, coalesce(c.relname, '???')
> FROM (SELECT count(*) buffers, reldatabase, relfilenode
> FROM pg_buffercache group by reldatabase, relfilenode) b
>LEFT JOIN pg_database d ON d.oid = b.reldatabase
>LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
>  WHERE datname = current_database())
>AND b.relfilenode = pg_relation_filenode(c.oid)
>ORDER BY buffers desc;
> ERROR:  invalid memory alloc request size 1174405120
>
> This is a situation where MemoryContextAllocHuge() should be used
> instead of palloc().  Also, it may need back-patching?

I guess so.  Although it's not very desirable for it to use that much
memory, I suppose if you have a terabyte of shared_buffers you
probably have 4GB of memory on top of that to show what they contain.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] palloc() too large on pg_buffercache with large shared_buffers

2016-09-13 Thread Kouhei Kaigai
Hello,

It looks to me like pg_buffercache tries to allocate more than 1GB using
palloc(), when shared_buffers is more than 256GB.

# show shared_buffers ;
 shared_buffers
 280GB
(1 row)

# SELECT buffers, d.datname, coalesce(c.relname, '???')
FROM (SELECT count(*) buffers, reldatabase, relfilenode
FROM pg_buffercache group by reldatabase, relfilenode) b
   LEFT JOIN pg_database d ON d.oid = b.reldatabase
   LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
 WHERE datname = current_database())
   AND b.relfilenode = pg_relation_filenode(c.oid)
   ORDER BY buffers desc;
ERROR:  invalid memory alloc request size 1174405120

This is a situation where MemoryContextAllocHuge() should be used
instead of palloc().  Also, it may need back-patching?

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 

pgsql-fix-pg_buffercache-palloc-huge.patch
