Re: [GENERAL] question about memory usage

2014-07-23 Thread klo uo
Tomas, thanks for the heads up!

I certainly didn't know what this setting means, beyond what the name
suggests. Your links helped.
I just can't find where this setting was suggested, but IIRC it was in a
guide for migrating OSM to PostGIS, as the other tables were just created by
GDAL/OGR.

I had this line in my `postgresql.conf`:

max_locks_per_transaction = 5    # 1

that's why I thought that 1 is the default, but it may be that the
commented value was entered by me rather than being the real default.

I've set it now to 64.

Thanks again




On Wed, Jul 23, 2014 at 4:10 PM, Tomas Vondra  wrote:

> On 23 July 2014, 15:56, klo uo wrote:
> > Bill, thanks for your reply.
> >
> > "shared_buffers" is set to "128MB".
> >
> > Now that you mention the config file, the only thing I did change there,
> > which was suggested to me while I was doing some work on my databases, was
> > "max_locks_per_transaction = 5" (which has default value 1).
> >
> > After resetting "max_locks_per_transaction" to the default value and
> > restarting the server, the memory occupied in the working set dropped to
> > around 200 MB.
> >
> > I guess this is it.
>
> The default value for max_locks_per_transaction is 64, not 1. Values
> this high are quite insane, and suggest that either you don't know what
> the value means (and increased it just in case, because "more is always
> better") or that the application does something wrong (eventually
> requiring so many locks).
>
> You really need to check this (notice how the amount of shared memory
> depends on max_locks_per_transaction):
>
>
> http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SHARED-MEMORY-PARAMETERS
>
> and this (which explains what max_locks_per_transaction does):
>
>
> http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html
>
> regards
> Tomas
>
>


Re: [GENERAL] question about memory usage

2014-07-23 Thread Tomas Vondra
On 23 July 2014, 15:56, klo uo wrote:
> Bill, thanks for your reply.
>
> "shared_buffers" is set to "128MB".
>
> Now that you mention the config file, the only thing I did change there,
> which was suggested to me while I was doing some work on my databases, was
> "max_locks_per_transaction = 5" (which has default value 1).
>
> After resetting "max_locks_per_transaction" to the default value and
> restarting the server, the memory occupied in the working set dropped to
> around 200 MB.
>
> I guess this is it.

The default value for max_locks_per_transaction is 64, not 1. Values
this high are quite insane, and suggest that either you don't know what
the value means (and increased it just in case, because "more is always
better") or that the application does something wrong (eventually
requiring so many locks).

You really need to check this (notice how the amount of shared memory
depends on max_locks_per_transaction):

http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SHARED-MEMORY-PARAMETERS

and this (which explains what max_locks_per_transaction does):

http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html
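To make the sizing concrete, here is a rough editorial sketch of the lock-table formula documented in the link above: the shared lock table holds about max_locks_per_transaction * (max_connections + max_prepared_transactions) entries. The per-entry byte figure and the "inflated" example value below are assumptions for illustration only; the real overhead varies by version and platform.

```python
# Sketch of PostgreSQL's shared lock-table sizing (per the docs):
#   entries = max_locks_per_transaction * (max_connections + max_prepared_transactions)
# The ~270 bytes per entry below is an illustrative assumption, not an
# exact figure for any particular version.

def lock_table_bytes(max_locks_per_transaction,
                     max_connections,
                     max_prepared_transactions=0,
                     bytes_per_entry=270):
    entries = max_locks_per_transaction * (max_connections + max_prepared_transactions)
    return entries * bytes_per_entry

# Default (64) with 100 connections: a couple of MB.
default_mb = lock_table_bytes(64, 100) / 1024**2
# A wildly inflated setting (a hypothetical 50000) with the same connections
# pushes the lock table past a gigabyte of shared memory.
inflated_mb = lock_table_bytes(50000, 100) / 1024**2

print(f"default:  {default_mb:.1f} MB")
print(f"inflated: {inflated_mb:.1f} MB")
```

This is why the amount of shared memory allocated at startup grows with max_locks_per_transaction, as the kernel-resources docs linked above describe.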

regards
Tomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about memory usage

2014-07-23 Thread klo uo
Bill, thanks for your reply.

"shared_buffers" is set to "128MB".

Now that you mention the config file, the only thing I did change there,
which was suggested to me while I was doing some work on my databases, was
"max_locks_per_transaction = 5" (which has default value 1).

After resetting "max_locks_per_transaction" to the default value and
restarting the server, the memory occupied in the working set dropped to
around 200 MB.

I guess this is it.


On Wed, Jul 23, 2014 at 5:53 AM, Bill Moran 
wrote:

>
> I'm not an expert on the Windows version, so I could be off-base, but the
> POSIX versions of Postgres allocate shared_buffers worth of memory at
> startup
> and lock it for exclusive use by Postgres.  Do you have shared_buffers set
> to
> around 1G, perhaps?
>
>


Re: [GENERAL] question about memory usage

2014-07-22 Thread Bill Moran
On Wed, 23 Jul 2014 00:16:47 +0200
klo uo  wrote:

> 
> Looking in Process Explorer, I see an unusual size for the postgres server
> process, i.e. a working set of around 1 GB:
> http://i.imgur.com/HmkvFLM.png (same in attachment)
> 
> I also use a SQL Server Express instance with several databases (including
> spatial ones), but it doesn't go above 100 MB in Private Bytes and around
> 1 MB in Working Set.
> 
> I wanted to ask whether this is normal, or whether there is some problem
> with my server settings?

I'm not an expert on the Windows version, so I could be off-base, but the
POSIX versions of Postgres allocate shared_buffers worth of memory at startup
and lock it for exclusive use by Postgres.  Do you have shared_buffers set to
around 1G, perhaps?

-- 
Bill Moran 




Re: [GENERAL] Question about memory usage

2014-01-10 Thread Preston Hagar
On Fri, Jan 10, 2014 at 12:19 PM, Tom Lane  wrote:
>
>> Preston Hagar  writes:
>> >>> tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
>> >>> despite the server now having 32 GB instead of 4 GB of RAM and the
>> >>> workload and number of clients remaining the same.
>>
>> > Here are a couple of examples from the incident we had this morning:
>> > 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
>> > connection: Cannot allocate memory
>> > 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
>> > connection: Cannot allocate memory
>>
>> That's odd ... ENOMEM from fork() suggests that you're under system-wide
>> memory pressure.
>>
>> > [ memory map dump showing no remarkable use of memory at all ]
>> > 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>> >  10.1.1.6(36680)ERROR:  out of memory
>> > 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>> >  10.1.1.6(36680)DETAIL:  Failed on request of size 500.
>>
>> I think that what you've got here isn't really a Postgres issue, but
>> a system-level configuration issue: the kernel is being unreasonably
>> stingy about giving out memory, and it's not clear why.
>>
>> It might be worth double-checking that the postmaster is not being
>> started under restrictive ulimit settings; though offhand I don't
>> see how that theory could account for fork-time failures, since
>> the ulimit memory limits are per-process.
>>
>> Other than that, you need to burrow around in the kernel settings
>> and see if you can find something there that's limiting how much
>> memory it will give to Postgres.  It might also be worth watching
>> the kernel log when one of these problems starts.  Plain old "top"
>> might also be informative as to how much memory is being used.
>>
>
Thanks for the response.  I think it might have been the lack of a
swapfile (I replied as such in another response).


> That said, we have been using this site as a guide to try to figure things
> out about postgres and memory:
>
> http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/
>
> We came up with the following for all our current processes (we aren't out
> of memory and new connections are being accepted right now, but free memory
> seems low):
>
> 1. List of RSS usage for all postgres processes:
>
> http://pastebin.com/J7vy846k
>
> 2. List of all memory segments for postgres checkpoint process (pid 30178)
>
> grep -B1 -E '^Size: *[0-9]{6}' /proc/30178/smaps
> 7f208acec000-7f2277328000 rw-s  00:04 31371473
> /dev/zero (deleted)
> Size:8067312 kB
>
> 3. Info on largest memory allocation for postgres checkpoint process. It
> is using 5GB of RAM privately.
>
> cat /proc/30178/smaps | grep 7f208acec000 -B 0 -A 20
>
> Total RSS: 11481148
> 7f208acec000-7f2277328000 rw-s  00:04 31371473
> /dev/zero (deleted)
> Size:8067312 kB
> Rss: 5565828 kB
> Pss: 5284432 kB
> Shared_Clean:  0 kB
> Shared_Dirty: 428840 kB
> Private_Clean: 0 kB
> Private_Dirty:   5136988 kB
> Referenced:  5559624 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> Locked:0 kB
> 7f2277328000-7f22775f1000 r--p  09:00 2889301
>  /usr/lib/locale/locale-archive
> Size:   2852 kB
> Rss:   8 kB
> Pss:   0 kB
> Shared_Clean:  8 kB
> Shared_Dirty:  0 kB
>
> If I am understanding all this correctly, the postgres checkpoint process
> has around 5GB of RAM "Private_Dirty" allocated (not shared buffers).  Is
> this normal?  Any thoughts as to why this would get so high?
>
> I'm still trying to dig in further to figure out exactly what is going on.
> We are running on Ubuntu 12.04.3 (kernel 3.5.0-44). We set
> vm.overcommit_memory = 2 but didn't have a swap partition; we have since
> added one and are seeing if that helps.
>
>
>
>>
>>
>> >> We had originally copied our shared_buffers, work_mem, wal_buffers and
>> >> other similar settings from our old config, but after getting the memory
>> >> errors have tweaked them to the following:
>> >
>> > shared_buffers= 7680MB
>> > temp_buffers  = 12MB
>> > max_prepared_transactions = 0
>> > work_mem  = 80MB
>> > maintenance_work_mem  = 1GB
>> > wal_buffers = 8MB
>> > max_connections = 350
>>
>> That seems like a dangerously large work_mem for so many connections;
>> but unless all the connections were executing complex queries, which
>> doesn't sound to be the case, that isn't the immediate problem.
>>
>>
> Thanks for the heads up.  We originally arrived at that value using pgtune
> with, I think, 250 connections, and I forgot to lower work_mem when I
> upped the connections.  I now have it set to 45 MB; does that seem more
> reasonable?
>
>
>
>
>> >> The weird thing is that our old server had 1/8th the RAM, was set to

Re: [GENERAL] Question about memory usage

2014-01-10 Thread Tomas Vondra
On 10 January 2014, 19:19, Tom Lane wrote:
> Preston Hagar  writes:
>>> tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
>>> despite the server now having 32 GB instead of 4 GB of RAM and the
>>> workload and number of clients remaining the same.
>
>> Here are a couple of examples from the incident we had this morning:
>> 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
>> connection: Cannot allocate memory
>> 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
>> connection: Cannot allocate memory
>
> That's odd ... ENOMEM from fork() suggests that you're under system-wide
> memory pressure.
>
>> [ memory map dump showing no remarkable use of memory at all ]
>> 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>>  10.1.1.6(36680)ERROR:  out of memory
>> 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>>  10.1.1.6(36680)DETAIL:  Failed on request of size 500.
>
> I think that what you've got here isn't really a Postgres issue, but
> a system-level configuration issue: the kernel is being unreasonably
> stingy about giving out memory, and it's not clear why.
>
> It might be worth double-checking that the postmaster is not being
> started under restrictive ulimit settings; though offhand I don't
> see how that theory could account for fork-time failures, since
> the ulimit memory limits are per-process.
>
> Other than that, you need to burrow around in the kernel settings
> and see if you can find something there that's limiting how much
> memory it will give to Postgres.  It might also be worth watching
> the kernel log when one of these problems starts.  Plain old "top"
> might also be informative as to how much memory is being used.

My bet is on overcommit - what are

vm.overcommit_memory
vm.overcommit_ratio

set to? Do you have swap or not? I've repeatedly run into very similar
OOM issues on machines with overcommit disabled (overcommit_memory=2) and
with no swap. There was plenty of RAM available (either free or in page
cache) but in case of a sudden peak the allocations failed. Also
vm.swappiness seems to play a role in this.
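To make the overcommit arithmetic concrete, here is an editorial sketch of the kernel's documented formula for strict accounting (overcommit_memory=2, ignoring hugepages): CommitLimit = swap + RAM * overcommit_ratio / 100. With no swap and the default ratio of 50, only half the RAM can ever be committed, regardless of how much is free:

```python
def commit_limit_mb(ram_mb, swap_mb, overcommit_ratio=50):
    """Kernel CommitLimit under vm.overcommit_memory=2 (hugepages ignored):
    swap + ram * overcommit_ratio / 100. Allocations beyond this fail
    with ENOMEM even if plenty of RAM sits in the page cache."""
    return swap_mb + ram_mb * overcommit_ratio // 100

# 32 GB RAM, no swap, default ratio 50: only ~16 GB committable.
no_swap = commit_limit_mb(32 * 1024, 0)
# Adding a 4 GB swap partition raises the limit accordingly.
with_swap = commit_limit_mb(32 * 1024, 4 * 1024)
print(no_swap, with_swap)  # prints: 16384 20480
```

This is consistent with the symptom described: fork() and palloc failures on a 32 GB box that never appears to run out of free memory.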

>>> The weird thing is that our old server had 1/8th the RAM, was set to
>>> max_connections = 600 and had the same clients connecting in the same
>>> way
>>> to the same databases and we never saw any errors like this in the
>>> several
>>> years we have been using it.

Chances are the old machine had swap, overcommit enabled, and/or higher
swappiness, so it was not running into these issues.

Anyway, I see you've mentioned shmmax/shmall in one of your previous
messages. I'm pretty sure those are irrelevant to the problem, because they
only affect the allocation of shared buffers (i.e. shared memory). So if
the database starts OK, the cause is somewhere else.

kind regards
Tomas Vondra





Re: [GENERAL] Question about memory usage

2014-01-10 Thread Tom Lane
Preston Hagar  writes:
>>> tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
>>> despite the server now having 32 GB instead of 4 GB of RAM and the workload
>>> and number of clients remaining the same.

> Here are a couple of examples from the incident we had this morning:
> 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
> connection: Cannot allocate memory
> 2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
> connection: Cannot allocate memory

That's odd ... ENOMEM from fork() suggests that you're under system-wide
memory pressure.

> [ memory map dump showing no remarkable use of memory at all ]
> 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>  10.1.1.6(36680)ERROR:  out of memory
> 2014-01-10 06:18:46 CST 10.1.1.6 16669 [unknown] production
>  10.1.1.6(36680)DETAIL:  Failed on request of size 500.

I think that what you've got here isn't really a Postgres issue, but
a system-level configuration issue: the kernel is being unreasonably
stingy about giving out memory, and it's not clear why.

It might be worth double-checking that the postmaster is not being
started under restrictive ulimit settings; though offhand I don't
see how that theory could account for fork-time failures, since
the ulimit memory limits are per-process.

Other than that, you need to burrow around in the kernel settings
and see if you can find something there that's limiting how much
memory it will give to Postgres.  It might also be worth watching
the kernel log when one of these problems starts.  Plain old "top"
might also be informative as to how much memory is being used.

>> We had originally copied our shared_buffers, work_mem, wal_buffers and
>> other similar settings from our old config, but after getting the memory
>> errors have tweaked them to the following:
> 
> shared_buffers= 7680MB
> temp_buffers  = 12MB
> max_prepared_transactions = 0
> work_mem  = 80MB
> maintenance_work_mem  = 1GB
> wal_buffers = 8MB
> max_connections = 350

That seems like a dangerously large work_mem for so many connections;
but unless all the connections were executing complex queries, which
doesn't sound to be the case, that isn't the immediate problem.
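A back-of-the-envelope editorial sketch of why 80MB work_mem with 350 connections is risky: work_mem is a per sort/hash-node budget, not a per-connection cap, so a single complex query can claim it several times. The one-node-per-query multiplier below is an illustrative assumption, which makes this a floor on the true worst case:

```python
def worst_case_mb(max_connections, work_mem_mb, shared_buffers_mb,
                  nodes_per_query=1):
    """Crude lower bound on worst-case memory: every connection running a
    query that uses work_mem in nodes_per_query sort/hash nodes, plus
    shared_buffers. Real queries can use work_mem more than once."""
    return max_connections * work_mem_mb * nodes_per_query + shared_buffers_mb

# 350 connections x 80 MB + 7680 MB shared_buffers, one work_mem node each:
print(worst_case_mb(350, 80, 7680))  # prints 35680 -- already past 32 GB
# The later 45 MB setting brings the same bound under the 32 GB of RAM:
print(worst_case_mb(350, 45, 7680))  # prints 23430
```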

>> The weird thing is that our old server had 1/8th the RAM, was set to
>> max_connections = 600 and had the same clients connecting in the same way
>> to the same databases and we never saw any errors like this in the several
>> years we have been using it.

This reinforces the impression that something's misconfigured at the
kernel level on the new server.

regards, tom lane




Re: [GENERAL] Question about memory usage

2014-01-10 Thread Preston Hagar
On Fri, Jan 10, 2014 at 11:09 AM, Steve Atkins  wrote:

>
> On Jan 10, 2014, at 8:35 AM, Preston Hagar  wrote:
>
> > tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
> despite the server now having 32 GB instead of 4 GB of RAM and the workload
> and number of clients remaining the same.
> >
> >
> > Details:
> >
> > We have been using Postgresql for some time internally with much
> success.  Recently, we completed a migration off of an older server running
> 8.3 to a new server running 9.3.  The older server had 4GB of RAM, the new
> server has 32 GB.
> >
> > For some reason, since migrating we are getting lots of "out of memory"
> and "cannot allocate memory" errors on the new server when the server gets
> under a decent load.  We have upped shmmax to 17179869184 and shmall to
> 4194304.
>
> What are the exact error messages you’re getting, and where are you seeing
> them?
>
>
Thanks for the reply.  I'm seeing them in the main postgresql log
(/var/log/postgresql/postgresql-9.3-main.log)

Here are a couple of examples from the incident we had this morning:

2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
connection: Cannot allocate memory
2014-01-10 06:14:40 CST  30176LOG:  could not fork new process for
connection: Cannot allocate memory


TopMemoryContext: 160408 total in 19 blocks; 12984 free (41 chunks); 147424
used
  TopTransactionContext: 8192 total in 1 blocks; 7392 free (0 chunks); 800
used
  Btree proof lookup cache: 8192 total in 1 blocks; 1680 free (0 chunks);
6512 used
  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  Type information cache: 24240 total in 2 blocks; 3744 free (0 chunks);
20496 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
  MessageContext: 524288 total in 7 blocks; 225240 free (3 chunks); 299048
used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  smgr relation table: 57344 total in 3 blocks; 34320 free (10 chunks);
23024 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 768 free (0 chunks); 256 used
  ExecutorState: 329080 total in 8 blocks; 105944 free (4 chunks);
223136 used
TupleSort: 32816 total in 2 blocks; 176 free (2 chunks); 32640 used
printtup: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8128 free (3 chunks); 64 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 7896 free (0 chunks); 296 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 11792 free (3 chunks); 12784
used
  CacheMemoryContext: 9815680 total in 25 blocks; 8143416 free (1 chunks);
1672264 used
pg_toast_3598032_index: 1024 total in 1 blocks; 16 free (0 chunks);
1008 used
mls_office_uid_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872
used
staff_primary: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
staff_desk_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
person_primary: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
person_person_workphone_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
person_person_mobilephone_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
person_person_lastname_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
person_person_homephone_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
person_person_homeofficephone_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
person_person_firstname_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
agent_primary: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
agent_membertype_id: 1024 total in 1 blocks; 152 free (0 chunks); 872
used
agent_floydtraining_id: 1024 total in 1 blocks; 152 free (0 chunks);
872 used
agent_agent_sync_realtorid_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
agent_agent_sync_oagentid_idx: 1024 total in 1 blocks; 152 free (0
chunks); 872 used
agent_agent_personid_idx: 1024 total in 1 blocks; 152 free (0 chunks);
872 used
agent_agent_license_idx: 1024 total in 1 blocks; 152 free (0 chunks);
872 used
pg_toast_2619_index: 1024 total in 1 blocks; 16 free (0 chunks); 1008
used
mls_agent_uid_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
mls_res_uidprp_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872
used
mls_res_uid_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872 used
mls_res_streetnum_idx: 1024 total in 1 blocks; 152 free (0 chunks); 872
used
mls_res_streetname: 1024 tota
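Memory-context dumps like the one above can be summarized programmatically. A minimal editorial sketch (assuming each context line follows the "name: N total in M blocks; ..." shape shown) that totals the reported bytes:

```python
import re

def context_total_bytes(dump_text):
    """Sum the 'N total in M blocks' byte counts from a PostgreSQL
    memory-context dump (TopMemoryContext and all children)."""
    return sum(int(m.group(1))
               for m in re.finditer(r'(\d+) total in \d+ blocks', dump_text))

# Usage: paste the dump from the server log into a file, then:
#   print(context_total_bytes(open('dump.txt').read()) / 1024, 'kB')
```

Note that the dump in this message is modest (well under 16 MB in total), which supports Tom's point that the pressure is system-wide rather than inside one backend.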

Re: [GENERAL] Question about memory usage

2014-01-10 Thread Steve Atkins

On Jan 10, 2014, at 8:35 AM, Preston Hagar  wrote:

> tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors despite 
> the server now having 32 GB instead of 4 GB of RAM and the workload and 
> number of clients remaining the same.
> 
> 
> Details:
> 
> We have been using Postgresql for some time internally with much success.  
> Recently, we completed a migration off of an older server running 8.3 to a 
> new server running 9.3.  The older server had 4GB of RAM, the new server has 
> 32 GB. 
> 
> For some reason, since migrating we are getting lots of "out of memory" and 
> "cannot allocate memory" errors on the new server when the server gets under 
> a decent load.  We have upped shmmax to 17179869184 and shmall to 4194304.  

What are the exact error messages you’re getting, and where are you seeing them?

> 
> We had originally copied our shared_buffers, work_mem, wal_buffers and other 
> similar settings from our old config, but after getting the memory errors 
> have tweaked them to the following:
> 
> shared_buffers= 7680MB  
> temp_buffers  = 12MB
> max_prepared_transactions = 0
> work_mem  = 80MB
> maintenance_work_mem  = 1GB 
> wal_buffers = 8MB 
> max_connections = 350 
> 
> The current settings seem to have helped, but we are still occasionally 
> getting the errors.
> 
> The weird thing is that our old server had 1/8th the RAM, was set to 
> max_connections = 600 and had the same clients connecting in the same way to 
> the same databases and we never saw any errors like this in the several years 
> we have been using it.
> 
> One issue I could see is that one of our main applications that connects to 
> the database, opens a connection on startup, holds it open the entire time it 
> is running, and doesn't close it until the app is closed.  In daily usage, 
> for much of our staff it is opened first thing in the morning and left open 
> all day (meaning the connection is held open for 8+ hours).  This was never 
> an issue with 8.3, but I know it isn't a "best practice" in general.

That’s probably not related to the problems you’re seeing - I have apps that 
hold a connection to the database open for years. As long as it doesn’t keep a 
transaction open for a long time, you’re fine.

> 
> We are working to update our application to be able to use pgbouncer with 
> transaction connections to try to alleviate the long held connections, but it 
> will take some time.

Using pgbouncer is probably a good idea - to reduce the number of concurrent 
connections, rather than the length of connections, though.

> 
> In the meantime, is there some other major difference or setting in 9.3 that 
> we should look out for that could be causing this?  Like I said, the same 
> database with the same load and number of clients has been running on a 8.3 
> install for years (pretty much since 2008 when 8.3 was released) with lesser 
> hardware with no issues.
> 
> Let me know if any other information would help out or if anyone has 
> suggestions of things to check.

Cheers,
  Steve


