Re: [GENERAL] Memory consumption for Query

2017-04-24 Thread dhaval jaiswal
>> Other operations don't really consume much memory.


Is there any way to find that out as well?


>> You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash or materialize.

I am aware of it.





Sent from Outlook



From: Albe Laurenz 
Sent: Monday, April 24, 2017 12:54 PM
To: 'dhaval jaiswal *EXTERN*'; PostgreSQL General
Subject: RE: Memory consumption for Query

dhaval jaiswal wrote:
> How to check how much memory query is consuming.
>
> Is there tool can check of query consuming memory for the execution or output.
>
> Let's say for following query how to calculate memory consumption.
>
> select * from test where id=1;

That query will not consume memory worth mentioning unless
"test" is a non-trivial view.

You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash or materialize.

Other operations don't really consume much memory.

Yours,
Laurenz Albe


Re: [GENERAL] Memory consumption for Query

2017-04-24 Thread Albe Laurenz
dhaval jaiswal wrote:
> How to check how much memory query is consuming.
> 
> Is there tool can check of query consuming memory for the execution or output.
> 
> Let's say for following query how to calculate memory consumption.
> 
> select * from test where id=1;

That query will not consume memory worth mentioning unless
"test" is a non-trivial view.

You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash or materialize.

Other operations don't really consume much memory.
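For readers following along, a minimal sketch of such a run; the table and row count here are made up purely for illustration:

```sql
-- Throwaway table with enough rows to make the sort node visible.
CREATE TEMP TABLE t AS SELECT generate_series(1, 100000) AS id;

-- With the ANALYZE option, memory-intensive plan nodes report their
-- actual usage, e.g. a line like "Sort Method: quicksort  Memory: ...kB".
EXPLAIN (ANALYZE) SELECT * FROM t ORDER BY id DESC;
```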

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage per session

2016-07-14 Thread AMatveev
Hi

>I think the "problem" that he is having is fixable only by changing how 
>PostgreSQL itself works.
>His problem is a PL/pgSQL function which is 11K lines in length.
>When invoked, this function is "compiled" into a large tokenized parse tree.
>This parse tree is only usable in the session which invoked the the function.
>Apparently this parse tree takes a lot of memory. And "n" concurrent users of 
>this,
>highly used, function will therefore require "n" times as much memory because 
>the parse tree is _not_ shareable.
>This is explained in:
>https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING


There is an interesting continuation of the discussion at
pgsql-hack...@postgresql.org ([HACKERS] One process per session lack
of sharing)

https://www.postgresql.org/message-id/flat/301417900.20160712173911%40bitec.ru#301417900.20160712173...@bitec.ru

>I agree that there's not really a plan to implement this, but I don't
>agree that connection pooling solves the whole problem.  Most people
>can't get by with statement pooling, so in practice you are looking at
>transaction pooling or session pooling.  And that means that you can't
>really keep the pool size as small as you'd like because backends can
>be idle in transaction for long enough to force the pool size to be
>pretty large.  Also, pooling causes the same backends to get reused
>for different sessions which touch different relations and different
>functions so that, for example, the relcache and the PL/pgsql function
>caches grow until every one of those sessions has everything cached
>that any client needs.  That can cause big problems.
>
>So, I actually think it would be a good idea to think about this.  The
>problem, of course, is that as long as we allow arbitrary parts of the
>code - including extension code - to declare global variables and
>store arbitrary stuff in them without any coordination, it's
>impossible to imagine hibernating and resuming a session without a
>risk of things going severely awry.  This was a major issue for
>parallel query, but we've solved it, mostly, by designating the things
>that rely on global variables as parallel-restricted, and there
>actually aren't a ton of those.  So I think it's imaginable that we
>can get to a point where we can, at least in some circumstances, let a
>backend exit and reconstitute its state at a later time.  It's not an
>easy project, but I think it is one we will eventually need to do.
>Insisting that the current model is working is just sticking our head
>in the sand.  It's mostly working, but there are workloads where it
>fails badly - and competing database products survive a number of
>scenarios where we just fall on our face.





Re: [GENERAL] Memory usage per session

2016-07-11 Thread AMatveev
Hi

> On Jul 08, John McKown modulated:
> ...
>> I think the "problem" that he is having is fixable only by changing how
>> PostgreSQL itself works. His problem is a PL/pgSQL function which is
>> 11K lines in length. When invoked, this function is "compiled" into a
>> large tokenized parse tree. This parse tree is only usable in the
>> session which invoked the the function. Apparently this parse tree
>> takes a lot of memory.

It's pleasant to see smart people, and I got some help here.
The most valuable advice is to deploy PgPool or PgBouncer.
Thanks, guys. Usually I don't respond on certain themes:
http://www.joelonsoftware.com/articles/fog18.html
But maybe it will help somebody.

> Right.  I'm not entirely sure the original poster wants to hear
> practical solutions,
Of course this answer requires "moral justification" :)

> but I think there are three ways that someone
> familar with Postgres would avoid this type of problem:

> 1. Connection pooling. Set the max connection limit and other tuning
>parameters appropriately for your workload and available
>resources. Don't get into a state as was described (thousands of
>open connections and only hundreds "active"). Make your client
>applications more disciplined.
You are joking :)
They pay us money, and they want an easy and cheap solution :))
I can say a terrible thing, but they can pay that money to others :)))
> 2. Buy more RAM.  You can easily grow to 512GB in just basic
>dual-socket servers these days.  This hardware cost may well 
>be worth it to avoid human labor costs.
Thanks cap :))
http://www.reduxsquad.com/wp-content/uploads/2016/02/hotels-com-spring-break-sale-captain-obvious-workout-bathroom-large-8.jpg
> 3. Rewrite or refactor such complex stored procedures in a different
>programming language such as C or Python, so your PL/pgsql stored
>procedures remain small glue around libraries of code.  Postgres
>makes it very trivial to extend the system with such procedural
>libraries.
>
Let's read first :)





Re: [GENERAL] Memory usage per session

2016-07-08 Thread John R Pierce

On 7/8/2016 12:00 PM, Karl Czajkowski wrote:

3. Rewrite or refactor such complex stored procedures in a different
programming language such as C or Python, so your PL/pgsql stored
procedures remain small glue around libraries of code.  Postgres
makes it very trivial to extend the system with such procedural
libraries.


4.  Move said business logic to app servers, to unload the database
server from doing so much compute and get better all-around
performance. Use plpgsql functions for inner functions only where
there are significant performance gains. It's easy to cluster app
servers; it's not so easy to parallelize database servers.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Karl Czajkowski
On Jul 08, John McKown modulated:
...
> I think the "problem" that he is having is fixable only by changing how
> PostgreSQL itself works. His problem is a PL/pgSQL function which is
> 11K lines in length. When invoked, this function is "compiled" into a
> large tokenized parse tree. This parse tree is only usable in the
> session which invoked the the function. Apparently this parse tree
> takes a lot of memory.

Right.  I'm not entirely sure the original poster wants to hear
practical solutions, but I think there are three ways that someone
familar with Postgres would avoid this type of problem:

1. Connection pooling. Set the max connection limit and other tuning
   parameters appropriately for your workload and available
   resources. Don't get into a state as was described (thousands of
   open connections and only hundreds "active"). Make your client
   applications more disciplined.

2. Buy more RAM.  You can easily grow to 512GB in just basic
   dual-socket servers these days.  This hardware cost may well 
   be worth it to avoid human labor costs.

3. Rewrite or refactor such complex stored procedures in a different
   programming language such as C or Python, so your PL/pgsql stored
   procedures remain small glue around libraries of code.  Postgres
   makes it very trivial to extend the system with such procedural
   libraries.
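As an illustration of option 1, a transaction-pooling setup might be sketched with PgBouncer; every name and number below is illustrative, not taken from the thread:

```ini
; pgbouncer.ini -- illustrative values only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; reuse server backends between transactions
max_client_conn = 3000    ; thousands of clients may stay connected...
default_pool_size = 50    ; ...while only this many backends exist per db/user
```

With transaction pooling, idle-in-transaction sessions still pin a backend, which is exactly the caveat raised later in this thread.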
   

Karl




Re: [GENERAL] Memory usage per session

2016-07-08 Thread Pavel Stehule
2016-07-08 17:49 GMT+02:00 :

> Hi
>
>
> >> Oracle: about 5M
> >> postgreSql: about 160М
>
>
>
> >The almost session memory is used for catalog caches. So you should to
> have big catalog and long living sessions.
>
> >What do you do exactly?
>
> I've generate test code that  emulates instruction tree size for our
> production code.
> This test shows:
> -What is the size of instruction tree for our typical  BP
>   it's greater than 300M for each session
> -How often do PostgreSql parse the text
>   When postgres clean cache, so much often
>

PostgreSQL parses the source code of a function once per session.


>
>
> So Oracle is much better in this case.
> It's very difficult really estimate in such case, to buy Oracle or to by
> hardware.
>

It should be said that the implementations of PL/pgSQL and PL/SQL are strongly
different. When your use case is not a usual one, migration to Postgres
sometimes needs a redesign.

Regards

Pavel


Re: [GENERAL] Memory usage per session

2016-07-08 Thread John McKown
On Fri, Jul 8, 2016 at 11:26 AM, Melvin Davidson 
wrote:

>
>
> On Fri, Jul 8, 2016 at 11:49 AM,  wrote:
>
>> Hi
>>
>>
>> >> Oracle: about 5M
>> >> postgreSql: about 160М
>>
>>
>>
>> >The almost session memory is used for catalog caches. So you should to
>> have big catalog and long living sessions.
>>
>> >What do you do exactly?
>>
>> I've generate test code that  emulates instruction tree size for our
>> production code.
>> This test shows:
>> -What is the size of instruction tree for our typical  BP
>>   it's greater than 300M for each session
>> -How often do PostgreSql parse the text
>>   When postgres clean cache, so much often
>>
>> So Oracle is much better in this case.
>> It's very difficult really estimate in such case, to buy Oracle or to by
>> hardware.
>>
>>
>
> *My questions:*
> >What is the actual O/S that PostgreSQL is installed on?
> >How much total memory is on the server?
> >I would be very curious about the values you have  specified in
> postgresql.conf?
> > Also, what is the exact version of PostgreSQL you are using?
> >What is the total time to complete the test for all 3 DB's?
> >The best I can tell is that with all the unknowns, you are comparing
> apples to oranges.
>
> *Your answers:*
> >There is real problem for us.
> >The PL/pgSQL interpreter parses the function's source text and produces
> an internal binary instruction tree the first time the function is called
> (within each session)
>
>
> *Your answer is gibberish and has nothing to do with my questions.*
>
> *Have you even tuned the postgresql.conf?*
> *You cannot fairly compare PostgreSQL with any other database unless you
> first tune its postgresql.conf.*
>
> *Melvin Davidson*
>

I think the "problem" that he is having is fixable only by changing how
PostgreSQL itself works. His problem is a PL/pgSQL function which is 11K
lines in length. When invoked, this function is "compiled" into a large
tokenized parse tree. This parse tree is only usable in the session which
invoked the the function. Apparently this parse tree takes a lot of memory.
And "n" concurrent users of this, highly used, function will therefore
require "n" times as much memory because the parse tree is _not_
shareable.  This is explained in:
​​
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING

​In previous posts, he implied that he is running on some version of
Windows by referencing the VC compiler. I am _guessing_ that the other DBs
mentioned: MSSQL and Oracle implement their server side programming
differently so that it takes less memory. Perhaps by allowing the "compiled
program" to be shared between session.




-- 
"Pessimism is an admirable quality in an engineer. Pessimistic people check
their work three times, because they're sure that something won't be right.
Optimistic people check once, trust in Solis-de to keep the ship safe, then
blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Melvin Davidson
On Fri, Jul 8, 2016 at 11:49 AM,  wrote:

> Hi
>
>
> >> Oracle: about 5M
> >> postgreSql: about 160М
>
>
>
> >The almost session memory is used for catalog caches. So you should to
> have big catalog and long living sessions.
>
> >What do you do exactly?
>
> I've generate test code that  emulates instruction tree size for our
> production code.
> This test shows:
> -What is the size of instruction tree for our typical  BP
>   it's greater than 300M for each session
> -How often do PostgreSql parse the text
>   When postgres clean cache, so much often
>
> So Oracle is much better in this case.
> It's very difficult really estimate in such case, to buy Oracle or to by
> hardware.
>
>

*My questions:*
>What is the actual O/S that PostgreSQL is installed on?
>How much total memory is on the server?
>I would be very curious about the values you have  specified in
postgresql.conf?
> Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DB's?
>The best I can tell is that with all the unknowns, you are comparing
apples to oranges.

*Your answers:*
>There is real problem for us.
>The PL/pgSQL interpreter parses the function's source text and produces an
internal binary instruction tree the first time the function is called
(within each session)


*Your answer is gibberish and has nothing to do with my questions.*

*Have you even tuned the postgresql.conf?*
*You cannot fairly compare PostgreSQL with any other database unless you
first tune its postgresql.conf.*

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hi


>> Oracle: about 5M
>> postgreSql: about 160М



>Almost all of the session memory is used for catalog caches. So you probably have a big catalog and long-living sessions.

>What do you do exactly?

I've generated test code that emulates the instruction tree size of our production code.
This test shows:
- the size of the instruction tree for our typical BP:
  it's greater than 300M for each session
- how often PostgreSQL parses the text:
  the more often postgres cleans the cache, the more often it reparses

So Oracle is much better in this case.
It's very difficult to really estimate, in such a case, whether to buy Oracle or to buy hardware.






Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160М


>What is the actual O/S that PostgreSQL is installed on? 
>How much total memory is on the server? 
>I would be very curious about the values you have  specified in postgresql.conf?
> Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DB's?
>The best I can tell is that with all the unknowns, you are comparing apples to oranges.

There is a real problem for us.
The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session).

The size of this instruction tree depends only on the PostgreSQL build.

And the size is big, and it is not shared between sessions.

I understand that nobody will say: damn, one second, I will implement it soon :)

Thanks to all for the constructive answers.




Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 17:07, amatv...@bitec.ru wrote:

Hi


The test performs about 11K lines of code
Oracle: about 5M
postgreSql: about 160М



Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

We can say at fact:
We currently  work at oracle.
Our code base about 4000 k line of code
In out last project we have:
3000 current connection
200 active session
So 16g it's very optimistic.
Of course  we think about buy hardware or software.
It's other question.
So with this memory consumption it can be really cheaper to by Oracle.

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

It's  possible only  with  application  server,


No, you can deploy PgPool or PgBouncer.
Apart from that, I just checked on my system: user sessions have a size of 16M,
not 160M.


for local network thick client has reasonable advantages.
  We just can't implement today all functions on thin client.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Melvin Davidson
On Fri, Jul 8, 2016 at 10:07 AM,  wrote:

> Hi
>
> >> >> The test performs about 11K lines of code
> >> >> Oracle: about 5M
> >> >> postgreSql: about 160М
>
>
> > Do you have 100 CPUs on this system which apparently doesn't have 16G
> > of RAM available for PG to use?
> We can say at fact:
> We currently  work at oracle.
> Our code base about 4000 k line of code
> In out last project we have:
> 3000 current connection
> 200 active session
> So 16g it's very optimistic.
> Of course  we think about buy hardware or software.
> It's other question.
> So with this memory consumption it can be really cheaper to by Oracle.
> > If not, you should probably consider connection pooling to reduce the
> > number of PG sessions to something approaching the number of CPUs/cores
> > you have in the system.
> It's  possible only  with  application  server,
> for local network thick client has reasonable advantages.
>  We just can't implement today all functions on thin client.
>
>
>
>

What is the actual O/S that PostgreSQL is installed on?
How much total memory is on the server?
I would be very curious about the values you have specified in
postgresql.conf.
Also, what is the exact version of PostgreSQL you are using?
What is the total time to complete the test for all 3 DBs?
The best I can tell is that, with all the unknowns, you are comparing apples
to oranges.

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hello.

>> Oracle: about 5M
>> postgreSql: about 160М


>​I'm admittedly ignorant of this type of testing. But if the memory usage for PostgreSQL is in the server, perhaps due to caching (how to test?), then it likely would _not_ linearly scale up >as the number of clients increased because every clients >would share the same cache data within the server.​ Or are you measuring the memory usage where the client is on one machine and the >PostgreSQL server is a different machine, with the client machine getting the memory hit?

I can send a zip file with the test code on request.
To tell the truth, it's described in the documentation:
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
"The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session)."

It's very sad :(

The test is very simply:
Generate code:
DECLARE
  svSql "varchar";
BEGIN
  for nvi in 1..10
  loop
    svSql = 'CREATE OR REPLACE FUNCTION perfa."func'||nvi||'" (
)
RETURNS void AS
$body$
DECLARE
  svSql "varchar";
BEGIN
  svSql := '''';
  PERFORM perfb."func'||(nvi-1)*10+1||'"();
  PERFORM perfb."func'||(nvi-1)*10+2||'"();
  PERFORM perfb."func'||(nvi-1)*10+3||'"();
  PERFORM perfb."func'||(nvi-1)*10+4||'"();
  PERFORM perfb."func'||(nvi-1)*10+5||'"();
  PERFORM perfb."func'||(nvi-1)*10+6||'"();
  PERFORM perfb."func'||(nvi-1)*10+7||'"();
  PERFORM perfb."func'||(nvi-1)*10+8||'"();
  PERFORM perfb."func'||(nvi-1)*10+9||'"();
  PERFORM perfb."func'||(nvi-1)*10+10||'"();         
END;
$body$
LANGUAGE ''plpgsql''
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER';
    EXECUTE svSql;
  end loop;
END;

Download JMeter.
Run this function in 50 threads.


About 4G of memory will be consumed in one second.
It's hard to miss :))
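(Editorial aside: on modern PostgreSQL, version 14 and later, so not the 9.5 discussed in this thread, the per-session memory described above can be inspected directly from SQL:)

```sql
-- Largest memory contexts of the current backend; PL/pgSQL parse trees
-- and cached plans show up among these contexts.
SELECT name, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```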






Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev
Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160М


> Do you have 100 CPUs on this system which apparently doesn't have 16G
> of RAM available for PG to use?
We can state as fact:
We currently work on Oracle.
Our code base is about 4000K lines of code.
In our last project we had:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we are thinking about buying hardware or software;
that's a separate question.
So with this memory consumption it can really be cheaper to buy Oracle.
> If not, you should probably consider connection pooling to reduce the
> number of PG sessions to something approaching the number of CPUs/cores
> you have in the system.
It's possible only with an application server;
for a local network a thick client has reasonable advantages.
 We just can't implement all functions in a thin client today.





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Stephen Frost
* amatv...@bitec.ru (amatv...@bitec.ru) wrote:
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
> 
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for 
> > performance.
> Of cause we understand that unix is faster( damn fork :)))
> Our Current problem is memory:
>  (160m vs 5M) * 100 sessions  = 16G vs 0.5G
> We just can get "out of memory".
> :(((

Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Memory usage per session

2016-07-08 Thread John McKown
On Fri, Jul 8, 2016 at 8:16 AM,  wrote:

> Hi
>
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> Hi.
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
>
>
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for
> performance.
> Of cause we understand that unix is faster( damn fork :)))
> Our Current problem is memory:
>  (160m vs 5M) * 100 sessions  = 16G vs 0.5G
> We just can get "out of memory".
> :(((
>

​I'm admittedly ignorant of this type of testing. But if the memory usage
for PostgreSQL is in the server, perhaps due to caching (how to test?),
then it likely would _not_ linearly scale up as the number of clients
increased because every client would share the same cache data within the
server.​ Or are you measuring the memory usage where the client is on one
machine and the PostgreSQL server is a different machine, with the client
machine getting the memory hit?

-- 
"Pessimism is an admirable quality in an engineer. Pessimistic people check
their work three times, because they're sure that something won't be right.
Optimistic people check once, trust in Solis-de to keep the ship safe, then
blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Pavel Stehule
Hi

2016-07-08 15:16 GMT+02:00 :

> Hi
>
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> Hi.
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
>
>
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for
> performance.
> Of cause we understand that unix is faster( damn fork :)))
> Our Current problem is memory:
>  (160m vs 5M) * 100 sessions  = 16G vs 0.5G
> We just can get "out of memory".
> :(((
>

Almost all of the session memory is used for catalog caches. So you probably
have a big catalog and long-living sessions.

What do you do exactly?

Regards

Pavel

>
>
>
>


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev
Hi

> On 08/07/2016 14:11, amatv...@bitec.ru wrote:
>> Hi.
>> The test performs about 11K lines of code
>> Memory usage per session:
>> Oracle: about 5M
>> MSSqlServer: about 4M
>> postgreSql: about 160М


> Visual C???
> You will have to run PostgreSQL on a proper Unix system to test for 
> performance.
Of course we understand that Unix is faster (damn fork :))).
Our current problem is memory:
 (160M vs 5M) * 100 sessions = 16G vs 0.5G
We can just get "out of memory".
:(((





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 14:11, amatv...@bitec.ru wrote:

Hi.
We have tested postgreSql, Oracle, and MSSqlServer.
The test performs about 11K lines of code.
Memory usage per session:
Oracle: about 5M
MSSqlServer: about 4M
postgreSql: about 160M
The result for postgreSql is very sad (our typical business logic has about 30K
lines of code).
How can I reduce memory consumption per session?
Note, we can't move the business logic to an application server, as that would
lead us to performance problems.

I can send the test script on request.

Test description:
PostgreSQL 9.5.3, compiled by Visual C build 1800, 64-bit


Visual C???
You will have to run PostgreSQL on a proper Unix system to test for performance.


At the same time we run 50 sessions that perform the following functions:
CREATE OR REPLACE FUNCTION perfa.func9
...
BEGIN
   svSql:='';
   PERFORM perfb."func91"();
   ...
END;

CREATE OR REPLACE FUNCTION perfb.func91
...
BEGIN
   PERFORM perfc."func911"();
   ...
END;

CREATE OR REPLACE FUNCTION perfc.func911 (
)
RETURNS void AS
$body$
DECLARE
   svSql BIGINT;
BEGIN
   svSql:=0;
   ...
   svSql:=svSql+10;
END;
$body$






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] memory problem with refresh materialized view

2016-02-12 Thread Albe Laurenz
Enrico Pirozzi wrote:
> I have a new postgresql 9.5.0 installation on a new virtual server debian 8.3 
> x64 with 4gb RAM, I have
> compiled postgresql from source.
> 
> When I import a dump with materialized views I see that postgres process 
> takes about all 4 Gb and then
> I have this error
> 
>  fork: Cannot allocate memory

What are the memory-related settings of the database into which you restore?

Yours,
Laurenz Albe



Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-14 Thread Adrian Klaver

On 12/13/2015 01:23 PM, Gerhard Wiesinger wrote:

On 13.12.2015 21:14, Bill Moran wrote:

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory?


For me the complete picture doesn't look consistent.


I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that
whatever
is running under node is doing something in a horrifically
memory-inefficient
manner. Since you mention that you see nothing in the PG logs, that
makes it
even more likely (to me) that you're looking entirely in the wrong place.

I'd be willing to bet a steak dinner that if you put the web server on a
different server than the DB, that the memory problems would follow the
web server and not the DB server.


Changes in config:
track_activity_query_size = 102400
work_mem = 100MB
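(A side note on these settings, not from the thread: work_mem is a per-sort/per-hash limit for each backend, not a per-connection total, so one query can consume several multiples of it, and many concurrent sessions multiply that again.)

```sql
-- Rough worst-case arithmetic (illustrative only):
--   100 backends * 2 sort/hash nodes each * 100MB work_mem ~= 20GB
SHOW work_mem;
SET work_mem = '16MB';  -- it can also be adjusted per session
```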

Ok, we restarted PostgreSQL and had it stopped for seconds, and logged
top every second:

When PostgreSQL was down nearly all memory was freed, looks good to me.
So it is likely that node and other processes are not the cause.
Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
Swap: 512M Total, 477M Used, 35M Free, 93% Inuse

When PostgreSQL restarted, Inactive was growing fast (~1min):
Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M
Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

After some few minutes we are back again at the same situation:
Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

The steak dinner is mine :-) Donating to the PostgreSQL community :-)


To me, all the above proves is that this is a complete-system issue, and only
with all the parts running do you get a problem. It still does not indicate
which part, or which interaction of parts, is the issue. This is further muddied
by there being no description of what, if anything, you were doing in the above
scenarios.




Any further ideas, I don't think this is normal system behaviour.

Ciao,
Gerhard






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

Hello,

some further details from the original FreeBSD 10.1 machine:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

  PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98% 
postgres: username dbnamee 127.0.0.1(43367)  (postgres)
75862 pgsql 1  200  7321M  7266M select  4   7:04 0.00% 
postgres: checkpointer process(postgres)
75863 pgsql 1  200  7321M  7260M select  4   3:34 0.00% 
postgres: writer process(postgres)
75867 pgsql 1  200  7323M  7237M sbwait  0   3:19 0.00% 
postgres: username dbnamee ipaddress(58977)  (postgres)
76178 pgsql 1  200  7323M  7237M sbwait  2   3:18 0.00% 
postgres: username dbnamee ipaddress(35807)  (postgres)
75868 pgsql 1  200  7323M  7237M sbwait  0   3:20 0.00% 
postgres: username dbnamee ipaddress(58978)  (postgres)
75884 pgsql 1  200  7323M  7236M sbwait  3   3:20 0.00% 
postgres: username dbnamee ipaddress(59908)  (postgres)
75869 pgsql 1  200  7323M  7236M sbwait  3   3:20 0.00% 
postgres: username dbnamee ipaddress(58982)  (postgres)
76181 pgsql 1  200  7323M  7236M sbwait  2   3:18 0.00% 
postgres: username dbnamee ipaddress(35813)  (postgres)
75883 pgsql 1  200  7323M  7236M sbwait  0   3:20 0.00% 
postgres: username dbnamee ipaddress(59907)  (postgres)
76180 pgsql 1  200  7323M  7236M sbwait  1   3:19 0.00% 
postgres: username dbnamee ipaddress(35811)  (postgres)
76177 pgsql 1  200  7323M  7236M sbwait  1   3:18 0.00% 
postgres: username dbnamee ipaddress(35712)  (postgres)
76179 pgsql 1  200  7323M  7236M sbwait  5   3:18 0.00% 
postgres: username dbnamee ipaddress(35810)  (postgres)
64951 pgsql 1  750  7375M   662M CPU11   0:01 11.47% 
postgres: username dbnamee 127.0.0.1(32073)  (postgres)
64950 pgsql 1  770  7325M   598M CPU66   0:02 16.55% 
postgres: username dbnamee 127.0.0.1(44251)  (postgres)


Out of memory:
kernel: swap_pager_getswapspace(4): failed
kernel: swap_pager_getswapspace(8): failed
kernel: swap_pager_getswapspace(3): failed

Main issue is IMHO (as far as I understood the FreeBSD memory system) 
that 20G are INACTIVE. When I subtract the shared memory, even ~13GB 
should be available, but they are still allocated but inactive 
(INACTIVE). INACTIVE memory might be clean or dirty. As we get into out 
of memory situations it is likely that the memory is dirty (otherwise it 
would have been reused).


Config:
Memory: 32GB, Swap: 512MB

maintenance_work_mem = 512MB
effective_cache_size = 10GB
work_mem = 892MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7080MB
max_connections = 80
autovacuum_max_workers = 3
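For scale, a back-of-the-envelope worst case from these settings (a sketch only: it assumes one full work_mem allocation per connection, while real queries may use work_mem several times each or not at all):

```shell
# Hypothetical worst case from the settings above: shared_buffers plus one
# work_mem-sized sort per connection. Real queries may allocate work_mem
# several times each, so this is a lower bound on the worst case.
shared_buffers_mb=7080
work_mem_mb=892
max_connections=80
echo "$(( shared_buffers_mb + work_mem_mb * max_connections )) MB"  # 78440 MB, ~77 GB
```

That is well beyond the 32GB of RAM, so a burst of concurrent sorts could plausibly exhaust memory even without any leak.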

Thnx.

Ciao,
Gerhard


On 13.12.2015 08:49, Gerhard Wiesinger wrote:

Hello,

We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple 
worker processes connected via persistent connections to PostgreSQL, 
they perform just simple queries with SELECT on primary keys and 
simple INSERTS/UPDATES. Normally nearly all the workers are idle but 
they still consume the maximum configured work mem on the PostgreSQL 
server and the memory is also resident. If some other queries get in 
we get into out of memory situations. So it looks like PostgreSQL has 
memory leaks.


I found a test scenario to reproduce it also on a newer FreeBSD 10.2 
VM as well as in a Fedora 23 VM (both with PostgreSQL 9.4.5):


Executions in psql with one persistent connection:
-- Create the table
CREATE TABLE t_random AS SELECT s, md5(random()::text) FROM 
generate_Series(1,1) s;

-- Create the index
CREATE INDEX ON t_random(s);

-- Restart psql with a new connection:

-- Memory goes slightly up after each execution, even after canceling:
-- Memory leak on FreeBSD 10.2/Fedora 23 and PostgreSQL 9.4.5 on 
cancel the query or multiple execution

SELECT * FROM t_random ORDER BY md5 LIMIT 10;

-- Therefore I created a function:
CREATE OR REPLACE FUNCTION execmultiplei(IN num int8)
RETURNS void AS $$
BEGIN
  -- RAISE NOTICE 'num=%', num;
  FOR i IN 1..num LOOP
PERFORM * FROM t_random WHERE s = i;
  END LOOP;
END;
$$  LANGUAGE plpgsql;

-- Test it several times
SELECT execmultiplei(1000);
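One way to watch the backend's resident size while the function runs, without top (a Linux /proc sketch; the helper name is mine, and the PID would come from `SELECT pg_backend_pid();` in the psql session):

```shell
# Minimal sketch (Linux): print a process's resident set size in kB from
# /proc/PID/status. Feed it the backend PID from pg_backend_pid().
rss_kb() { awk '/^VmRSS:/ {print $2}' "/proc/$1/status"; }
rss_kb $$   # e.g. the current shell's own RSS in kB
```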

-- Linux testing (FreeBSD is similar), relevant part is RES (resident 
memory):

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
-- after startup of psql
26851 postgres  20   0 2363276   7432   6292 S   0.0  0.2 0:00.00 
postgres: postgres postgres [local] idle

-- Memory goes up, ok so far
26851 postgres  20   0 2365732 255152 253548 R  99.0  6.3 0:10.77 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 408464 406788 R 100.0 10.1 0:17.81 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 864472 862576 R 100.0 21.4 0:38.90 
postgres: postgres 

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

Hello Bill,

Thank you for your response, comments inline:

On 13.12.2015 16:05, Bill Moran wrote:

On Sun, 13 Dec 2015 09:57:21 +0100
Gerhard Wiesinger  wrote:

some further details from the original FreeBSD 10.1 machine:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98%
postgres: username dbnamee 127.0.0.1(43367)  (postgres)



I see no evidence of an actual leak here. Each process is basically using
the 7G of shared_buffers you have allocated in the config (which is only
7G _total_ for all processes, since it's shared memory)


OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?


Out of memory:
kernel: swap_pager_getswapspace(4): failed
kernel: swap_pager_getswapspace(8): failed
kernel: swap_pager_getswapspace(3): failed

Main issue is IMHO (as far as I understood the FreeBSD memory system)
that 20G are INACTIVE. When I subtract the shared memory, even ~13GB
should be available, but they are still allocated but inactive
(INACTIVE). INACTIVE memory might be clean or dirty. As we get into out
of memory situations it is likely that the memory is dirty (otherwise it
would have been reused).

Not quite correct. Inactive memory is _always_ available for re-use.


Are you sure that's true?

Monitoring inactive memory:
cat vm_stat.sh
#!/usr/bin/env bash

while [ 1 ]; do
  date +%Y.%m.%d.%H.%M.%S
  sysctl -a | grep vm.stats.vm.
  sleep 1
done

And even we get out of memory with swap_pager_getswapspace Inactive 
Memory (from the log file) is around 20GB (doesn't go down or up)

vm.stats.vm.v_inactive_count: 5193530 (*4k pages is around 20GB)

Then we have 20GB inactive memory, but we still get out of memory with 
kernel: swap_pager_getswapspace(4): failed. Any ideas why?
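The arithmetic behind the "~20GB" above, assuming the standard 4 KiB page size on amd64:

```shell
# vm.stats.vm counters are in pages; convert the quoted inactive count to
# MiB, assuming 4 KiB pages (the amd64 FreeBSD default).
pages=5193530
echo "$(( pages * 4096 / 1024 / 1024 )) MiB"   # 20287 MiB, i.e. roughly 19.8 GiB
```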





Config:
Memory: 32GB, Swap: 512MB

Probably not the cause of this problem, but this is a non-optimal layout.
If you're going to use swap at all, it should generally be 2x the size of
your RAM for optimal performance under memory pressure.


Yes, we will try disabling it entirely. Nevertheless, why do we run out 
of memory/swap?





maintenance_work_mem = 512MB
effective_cache_size = 10GB
work_mem = 892MB

I expect that this value is the cause of the problem. The scenario you
describe below is sorting a large table on an unindexed column, meaning
it will have to use all that work_mem. I'd be interested to see the
output of:

EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;


That was only a test query, has nothing to do with production based 
query. They are mostly SELECT/INSERTS/UPDATES on primary keys.



But even without that information, I'd recommend you reduce work_mem
to about 16M or so.


Why so low? Sorts in reporting or other long-running queries would then 
be done on disk rather than in memory.



wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7080MB
max_connections = 80
autovacuum_max_workers = 3

[snip]


We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple
worker processes connected via persistent connections to PostgreSQL,
they perform just simple queries with SELECT on primary keys and
simple INSERTS/UPDATES.

That's not at all the test scenario you show below. The scenario below
is a large sort operation on a non-indexed column, which is vastly
different than a single-row fetch based on an index.


Yes, that non-indexed select was just for testing purposes.




Normally nearly all the workers are idle but
they still consume the maximum configured work mem on the PostgreSQL
server and the memory is also resident.

I see no evidence of that in your top output. Each process has a
reference to the 7G of shared_buffers you allocated, which is memory
shared by all processes, and is expected. I'm not as familiar with
Linux top, but the results appear to be the same.


Yes, that might be the case, but even if each process maps the same 7G of 
shared memory, we still have ~20GB of inactive memory available, so plenty 
of memory. And why are we getting: kernel: swap_pager_getswapspace(4): failed?


Thnx.

Ciao,
Gerhard





Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Tom Lane
Gerhard Wiesinger  writes:
>> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
>> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

Just judging from the name of the function, I would bet this is a direct
result of having only 512M of swap configured.  As Bill already pointed
out, that's a pretty useless choice on a system with 32G of RAM.  As soon
as the kernel tries to push out any significant amount of idle processes,
it's gonna be out of swap space.  The numbers you show above prove that
it is almost out of free swap already.

Also, while that 20G of "inactive" pages may be candidates for reuse,
they probably can't actually be reused without swapping them out ...
and there's noplace for that data to go.

regards, tom lane




Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 16:35:08 +0100
Gerhard Wiesinger  wrote:

> Hello Bill,
> 
> Thank you for your response, comments inline:
> 
> On 13.12.2015 16:05, Bill Moran wrote:
> > On Sun, 13 Dec 2015 09:57:21 +0100
> > Gerhard Wiesinger  wrote:
> >> some further details from the original FreeBSD 10.1 machine:
> >>
> >> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> >> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> >>
> >> PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU 
> >> COMMAND
> >> 77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98%
> >> postgres: username dbnamee 127.0.0.1(43367)  (postgres)
> > 
> >
> > I see no evidence of an actual leak here. Each process is basically using
> > the 7G of shared_buffers you have allocated in the config (which is only
> > 7G _total_ for all processes, since it's shared memory)
> 
> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

You haven't provided enough information to isolate that cause yet. What's
in the Postgres log? Surely it will have logged something when its request
for RAM was denied, and it should be more informational than the OS'
generic message.

> >> Out of memory:
> >> kernel: swap_pager_getswapspace(4): failed
> >> kernel: swap_pager_getswapspace(8): failed
> >> kernel: swap_pager_getswapspace(3): failed
> >>
> >> Main issue is IMHO (as far as I understood the FreeBSD memory system)
> >> that 20G are INACTIVE. When I subtract the shared memory, even ~13GB
> >> should be available, but they are still allocated but inactive
> >> (INACTIVE). INACTIVE memory might be clean or dirty. As we get into out
> >> of memory situations it is likely that the memory is dirty (otherwise it
> >> would have been reused).
> > Not quite correct. Inactive memory is _always_ available for re-use.
> 
> Are you sure that's true?

Yes. Read The Design and Implementation of FreeBSD for the details.

> Monitoring inactive memory:
> cat vm_stat.sh
> #!/usr/bin/env bash
> 
> while [ 1 ]; do
>date +%Y.%m.%d.%H.%M.%S
>sysctl -a | grep vm.stats.vm.
>sleep 1
> done
> 
> And even we get out of memory with swap_pager_getswapspace Inactive 
> Memory (from the log file) is around 20GB (doesn't go down or up)
> vm.stats.vm.v_inactive_count: 5193530 (*4k pages is around 20GB)
> 
> Then we have 20GB inactive memory, but we still get out of memory with 
> kernel: swap_pager_getswapspace(4): failed. Any ideas why?

Theory: If the planner decides it needs to do 30 sort operations for a
query, it will try to allocate 27G of RAM, which exceeds what's available,
and therefore never gets allocated. So you get the "out of space" message,
but the actual memory usage doesn't change.
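The numbers behind that theory (a sketch; the figure of 30 concurrent sorts is the hypothetical above, not something measured):

```shell
# Sketch of the arithmetic in the theory above: 30 concurrent sorts, each
# allowed the configured work_mem of 892 MB, would request ~26 GB at once.
sorts=30
work_mem_mb=892
echo "$(( sorts * work_mem_mb )) MB"   # 26760 MB, roughly the 27G mentioned
```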

> >> maintenance_work_mem = 512MB
> >> effective_cache_size = 10GB
> >> work_mem = 892MB
> > I expect that this value is the cause of the problem. The scenario you
> > describe below is sorting a large table on an unindexed column, meaning
> > it will have to use all that work_mem. I'd be interested to see the
> > output of:
> >
> > EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;
> 
> That was only a test query, has nothing to do with production based 
> query. They are mostly SELECT/INSERTS/UPDATES on primary keys.

Then provide the _actual_ queries and the EXPLAIN ANALYZE and the table
layouts and basic data distribution of the actual cause. If your test
case is completely non-representative of what's happening, then you're
not going to get useful answers.

> > But even without that information, I'd recommend you reduce work_mem
> > to about 16M or so.
> 
> Why so low? Sorts in reporting or other long-running queries would then
> be done on disk rather than in memory.

Even a simple query could involve multiple sorts, and you're allowing
each sort to use up to 890M of RAM (which is _not_ shared). As noted
earlier, even a moderately complex query could exceed the available
RAM on the system. But since you don't provide the actual queries and
tables causing problems, I can only guess. And since you appear to
have already decided what the cause of the problem is, then crafted
completely non-relevent queries that you think prove your point, I'm
not sure there's anything I can do to help you.

> >> wal_buffers = 8MB
> >> checkpoint_segments = 16
> >> shared_buffers = 7080MB
> >> max_connections = 80
> >> autovacuum_max_workers = 3
> > [snip]
> >
> >>> We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple
> >>> worker processes connected via persistent connections to PostgreSQL,
> >>> they perform just simple queries with SELECT on primary keys and
> >>> simple INSERTS/UPDATES.
> > That's not at all the test scenario you show below. The scenario below
> > is a large sort operation on a non-indexed column, which is vastly
> > different than a single-row fetch based on an index.
> 
> Yes, that non-indexed select was just for testing purposes.


Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 09:57:21 +0100
Gerhard Wiesinger  wrote:
> 
> some further details from the original FreeBSD 10.1 machine:
> 
> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> 
>PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
> 77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98% 
> postgres: username dbnamee 127.0.0.1(43367)  (postgres)



I see no evidence of an actual leak here. Each process is basically using
the 7G of shared_buffers you have allocated in the config (which is only
7G _total_ for all processes, since it's shared memory)

> Out of memory:
> kernel: swap_pager_getswapspace(4): failed
> kernel: swap_pager_getswapspace(8): failed
> kernel: swap_pager_getswapspace(3): failed
> 
> Main issue is IMHO (as far as I understood the FreeBSD memory system) 
> that 20G are INACTIVE. When I subtract the shared memory, even ~13GB 
> should be available, but they are still allocated but inactive 
> (INACTIVE). INACTIVE memory might be clean or dirty. As we get into out 
> of memory situations it is likely that the memory is dirty (otherwise it 
> would have been reused).

Not quite correct. Inactive memory is _always_ available for re-use.

> Config:
> Memory: 32GB, Swap: 512MB

Probably not the cause of this problem, but this is a non-optimal layout.
If you're going to use swap at all, it should generally be 2x the size of
your RAM for optimal performance under memory pressure.

> maintenance_work_mem = 512MB
> effective_cache_size = 10GB
> work_mem = 892MB

I expect that this value is the cause of the problem. The scenario you
describe below is sorting a large table on an unindexed column, meaning
it will have to use all that work_mem. I'd be interested to see the
output of:

EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;

But even without that information, I'd recommend you reduce work_mem
to about 16M or so.
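A sketch of how such a change could be applied on 9.4 without penalizing the occasional big report (ALTER SYSTEM is available from 9.4; the 256MB session figure is only an example, not a recommendation from this thread):

```sql
-- Lower the cluster-wide default (PostgreSQL 9.4+, superuser), then reload:
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();

-- Individual reporting sessions can still opt in to a larger sort budget:
SET work_mem = '256MB';
```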

> wal_buffers = 8MB
> checkpoint_segments = 16
> shared_buffers = 7080MB
> max_connections = 80
> autovacuum_max_workers = 3

[snip]

> > We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple 
> > worker processes connected via persistent connections to PostgreSQL, 
> > they perform just simple queries with SELECT on primary keys and 
> > simple INSERTS/UPDATES.

That's not at all the test scenario you show below. The scenario below
is a large sort operation on a non-indexed column, which is vastly
different than a single-row fetch based on an index.

> > Normally nearly all the workers are idle but 
> > they still consume the maximum configured work mem on the PostgreSQL 
> > server and the memory is also resident.

I see no evidence of that in your top output. Each process has a
reference to the 7G of shared_buffers you allocated, which is memory
shared by all processes, and is expected. I'm not as familiar with
Linux top, but the results appear to be the same.

> > If some other queries get in 
> > we get into out of memory situations. So it looks like PostgreSQL has 
> > memory leaks.
> >
> > I found a test scenario to reproduce it also on a newer FreeBSD 10.2 
> > VM as well as in a Fedora 23 VM (both with PostgreSQL 9.4.5):
> >
> > Executions in psql with one persistent connection:
> > -- Create the table
> > CREATE TABLE t_random AS SELECT s, md5(random()::text) FROM 
> > generate_Series(1,1) s;
> > -- Create the index
> > CREATE INDEX ON t_random(s);
> >
> > -- Restart psql with a new connection:
> >
> > -- Memory goes slightly up after each execution, even after canceling:
> > -- Memory leak on FreeBSD 10.2/Fedora 23 and PostgreSQL 9.4.5 on 
> > cancel the query or multiple execution
> > SELECT * FROM t_random ORDER BY md5 LIMIT 10;
> >
> > -- Therefore I created a function:
> > CREATE OR REPLACE FUNCTION execmultiplei(IN num int8)
> > RETURNS void AS $$
> > BEGIN
> >   -- RAISE NOTICE 'num=%', num;
> >   FOR i IN 1..num LOOP
> > PERFORM * FROM t_random WHERE s = i;
> >   END LOOP;
> > END;
> > $$  LANGUAGE plpgsql;
> >
> > -- Test it several times
> > SELECT execmultiplei(1000);
> >
> > -- Linux testing (FreeBSD is similar), relevant part is RES (resident 
> > memory):
> >   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> > -- after startup of psql
> > 26851 postgres  20   0 2363276   7432   6292 S   0.0  0.2 0:00.00 
> > postgres: postgres postgres [local] idle
> > -- Memory goes up, ok so far
> > 26851 postgres  20   0 2365732 255152 253548 R  99.0  6.3 0:10.77 
> > postgres: postgres postgres [local] SELECT
> > 26851 postgres  20   0 2365732 408464 406788 R 100.0 10.1 0:17.81 
> > postgres: postgres postgres [local] SELECT
> > 26851 postgres  20   0 2365732 864472 862576 R 100.0 21.4 0:38.90 
> > postgres: postgres postgres [local] SELECT
> > -- Function execmultiplei and transaction terminated, but memory still 
> > allocated!!!
> > 26851 postgres  20   0 

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

On 13.12.2015 18:17, Tom Lane wrote:

Gerhard Wiesinger  writes:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

Just judging from the name of the function, I would bet this is a direct
result of having only 512M of swap configured.  As Bill already pointed
out, that's a pretty useless choice on a system with 32G of RAM.  As soon
as the kernel tries to push out any significant amount of idle processes,
it's gonna be out of swap space.  The numbers you show above prove that
it is almost out of free swap already.


The system wasn't designed by me; I wouldn't do it that way either. Would 
swapoff help?




Also, while that 20G of "inactive" pages may be candidates for reuse,
they probably can't actually be reused without swapping them out ...
and there's noplace for that data to go.


There is no entry in syslog (where postgres logs) at the time 
swap_pager_getswapspace is logged.


But why do we have 20G of Inactive pages? They are still allocated by 
kernel or user space. As you can see below (top output) NON Postgres 
processes are around 9G in virtual size, resident even lower. The system 
is nearly idle, and the queries typically aren't active after one second 
again. Therefore where does the rest of the 11G of Inactive pages come 
from (if it isn't a Postgres/FreeBSD memory leak)?

I read that Postgres has its own memory allocator:
https://www.reddit.com/r/programming/comments/18zija/github_got_30_better_performance_using_tcmalloc/
Might that be an issue with double allocation/freeing and "cheese-hole" 
memory fragmentation?


https://www.opennet.ru/base/dev/fbsdvm.txt.html
inactive: pages not actively used by programs which are dirty and (at 
some point) need to be written to their backing store (typically disk). 
These pages are still associated with objects and can be reclaimed if a 
program references them. Pages can be moved from the active to the 
inactive queue at any time with little adverse effect. Moving pages to 
the cache queue has bigger consequences (note 1)

https://unix.stackexchange.com/questions/134862/what-do-the-different-memory-counters-in-freebsd-mean
Active: Memory currently being used by a process
Inactive: Memory that has been freed but is still cached since it 
may be used again. If more Free memory is required, this memory can be 
cleared and become free. This memory is not cleared before it is needed, 
because "free memory is wasted memory", it doesn't cost anything to keep 
the old data around in case it is needed again.
Wired: Memory in use by the Kernel. This memory cannot be swapped 
out (GW: including ZFS cache!!!)
Cache: Memory being used to cache data, can be freed immediately if 
required

Buffers: Disk cache
Free: Memory that is completely free and ready to use. Inactive, 
Cache and Buffers can become free if they are cleaned up.


Thnx.

Ciao,
Gerhard


last pid:  7277;  load averages:  0.91,  0.96,  1.02  up 18+06:22:31
18:57:54

135 processes: 2 running, 132 sleeping, 1 waiting

Mem: 8020M Active, 19G Inact, 3537M Wired, 299M Cache, 1679M Buf, 38M Free
Swap: 512M Total, 501M Used, 12M Free, 97% Inuse

  PID USERNAMETHR PRI NICE   SIZERES STATE   C TIMEWCPU COMMAND
77941 pgsql 5  200  7921M  7295M usem7 404:32  10.25% 
postgres

79570 pgsql 1  200  7367M  6968M sbwait  6 4:24   0.59% postgres
73449 pgsql 1  270  7367M  6908M sbwait  4 8:08   5.08% postgres
74209 pgsql 1  270  7367M  6803M sbwait  0 6:37   1.46% postgres
74207 pgsql 1  250  7367M  6748M sbwait  6 6:34   1.76% postgres
74206 pgsql 1  220  7367M  6548M sbwait  5 6:44   1.86% postgres
73380 pgsql 1  230  7367M  7265M sbwait  4 8:15   1.17% postgres
74208 pgsql 1  240  7367M  7223M sbwait  1 6:30   4.59% postgres
79569 pgsql 1  240  7367M  7105M sbwait  3 4:36   1.17% postgres
74210 pgsql 1  290  7363M  7182M sbwait  5 6:41   5.47% postgres
73479 pgsql 1  220  7363M  6560M sbwait  6 7:14   3.56% postgres
83030 pgsql 1  200  7329M   193M sbwait  5 0:00   0.00% postgres
76178 pgsql 1  200  7323M  7245M sbwait  2 3:44   0.00% postgres
75867 pgsql 1  200  7323M  7245M sbwait  2 3:45   0.00% postgres
75869 pgsql 1  200  7323M  7245M sbwait  2 3:46   0.00% postgres
75883 pgsql 1  200  7323M  7245M sbwait  7 3:46   0.00% postgres
76180 pgsql   

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

On 13.12.2015 21:14, Bill Moran wrote:

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory?


For me the complete picture doesn't look consistent.


I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that whatever
is running under node is doing something in a horrifically memory-inefficient
manner. Since you mention that you see nothing in the PG logs, that makes it
even more likely (to me) that you're looking entirely in the wrong place.

I'd be willing to bet a steak dinner that if you put the web server on a
different server than the DB, that the memory problems would follow the
web server and not the DB server.


Changes in config:
track_activity_query_size = 102400
work_mem = 100MB

Ok, we restarted PostgreSQL and had it stopped for seconds, and logged 
top every second:


When PostgreSQL was down nearly all memory was freed, looks good to me. 
So it is likely that node and other processes are not the cause.

Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
Swap: 512M Total, 477M Used, 35M Free, 93% Inuse

When PostgreSQL restarted, Inactive was growing fast (~1min):
Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M 
Free

Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

After some few minutes we are back again at the same situation:
Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

The steak dinner is mine :-) Donating to the PostgreSQL community :-)

Any further ideas? I don't think this is normal system behaviour.

Ciao,
Gerhard





Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 20:09:04 +0100
Gerhard Wiesinger  wrote:

> On 13.12.2015 18:17, Tom Lane wrote:
> > Gerhard Wiesinger  writes:
> >>> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> >>> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> >> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?
> > Just judging from the name of the function, I would bet this is a direct
> > result of having only 512M of swap configured.  As Bill already pointed
> > out, that's a pretty useless choice on a system with 32G of RAM.  As soon
> > as the kernel tries to push out any significant amount of idle processes,
> > it's gonna be out of swap space.  The numbers you show above prove that
> > it is almost out of free swap already.
> 
> The system wasn't designed by me; I wouldn't do it that way either. Would
> swapoff help?

FreeBSD and Linux (and most modern OS) are designed to have swap,
and usually more swap than RAM. I have never heard a good reason for
not using swap, and the reasons I _have_ heard have always been by
people misinformed about how the OS works.

If someone has a _good_ explanation for why you wouldn't want any
swap on a DB server, I'd love to hear it; but everything I've heard
up till now has been speculation based on misinformation.

IOW: no, you should not turn swap off, you should instead allocate
the appropriate amount of swap space.

> > Also, while that 20G of "inactive" pages may be candidates for reuse,
> > they probably can't actually be reused without swapping them out ...
> > and there's noplace for that data to go.
> 
> There is no entry in syslog (where postgres logs) at the time
> swap_pager_getswapspace is logged.
> 
> But why do we have 20G of Inactive pages? They are still allocated by 
> kernel or user space. As you can see below (top output) NON Postgres 
> processes are around 9G in virtual size, resident even lower. The system 
> is nearly idle, and the queries typically aren't active after one second 
> again. Therefore where does the rest of the 11G of Inactive pages come 
> from (if it isn't a Postgres/FreeBSD memory leak)?
> I read that Postgres has its own memory allocator:
> https://www.reddit.com/r/programming/comments/18zija/github_got_30_better_performance_using_tcmalloc/
> Might that be an issue with double allocation/freeing and the "cheese 
> hole" topic with memory fragmentation?

If there were a memory leak in either FreeBSD or Postgres of the
seriousness you're describing that were as easy to trigger as you
claim, I would expect the mailing lists and other support forums
to be exploding in panic. Notice that they are not. Also, I still
don't see _ANY_ evidence of a leak. I see evidence that something
is happening that is trying to allocate a LOT of RAM, that isn't
available on your system; but that's not the same as a leak.

> https://www.opennet.ru/base/dev/fbsdvm.txt.html
> inactive: pages not actively used by programs which are dirty and (at
> some point) need to be written to their backing store (typically disk).
> These pages are still associated with objects and can be reclaimed if a
> program references them. Pages can be moved from the active to the
> inactive queue at any time with little adverse effect. Moving pages to
> the cache queue has bigger consequences (note 1)

Correct, but, when under pressure, the system _will_ recycle those
pages to be available.

Tom might be correct in that the system thinks they are inactive
because it could easily push them out to swap, but then it can't
_actually_ do that because you haven't allocated enough swap, but
that doesn't match my understanding of how inactive is used. A
question of that detail would be better asked on a FreeBSD forum,
as the differences between different VM implementations can be
pretty specific and technical.

[snip]

> Mem: 8020M Active, 19G Inact, 3537M Wired, 299M Cache, 1679M Buf, 38M Free
> Swap: 512M Total, 501M Used, 12M Free, 97% Inuse
> 
>PID USERNAMETHR PRI NICE   SIZERES STATE   C TIMEWCPU COMMAND
> 77941 pgsql 5  200  7921M  7295M usem7 404:32  10.25% 
> postgres
> 79570 pgsql 1  200  7367M  6968M sbwait  6 4:24   0.59% postgres

[snip about 30 identical PG processes]

> 32387 myusername9  200   980M   375M uwait   5 69:03   1.27% node

[snip similar processes]

>622 myusername1  200   261M  3388K kqread  3 41:01   0.00% nginx

[snip similar processes]

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory? I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that whatever
is running under node is doing something in a horrifically memory-inefficient
manner.

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 22:23:19 +0100
Gerhard Wiesinger  wrote:

> On 13.12.2015 21:14, Bill Moran wrote:
> > Wait ... this is a combined HTTP/Postgres server? You didn't mention that
> > earlier, and it's kind of important.
> >
> > What evidence do you have that Postgres is actually the part of
> > this system running out of memory?
> 
> For me the complete picture doesn't look consistent.

That's because you haven't gathered enough of the right type of information.

> > I don't see any such evidence in any of
> > your emails, and (based on experience) I find it pretty likely that whatever
> > is running under node is doing something in a horrifically 
> > memory-inefficient
> > manner. Since you mention that you see nothing in the PG logs, that makes it
> > even more likely (to me) that you're looking entirely in the wrong place.
> >
> > I'd be willing to bet a steak dinner that if you put the web server on a
> > different server than the DB, that the memory problems would follow the
> > web server and not the DB server.
> 
> Changes in config:
> track_activity_query_size = 102400
> work_mem = 100MB
> 
> OK, we restarted PostgreSQL, keeping it stopped for a few seconds, and logged 
> top every second:
> 
> When PostgreSQL was down nearly all memory was freed, looks good to me. 
> So it is likely that node and other processes are not the cause.
> Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
> Swap: 512M Total, 477M Used, 35M Free, 93% Inuse
> 
> When PostgreSQL restarted, Inactive was growing fast (~1min):
> Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M 
> Free
> Swap: 512M Total, 472M Used, 41M Free, 92% Inuse
> 
> After some few minutes we are back again at the same situation:
> Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
> Swap: 512M Total, 472M Used, 41M Free, 92% Inuse
> 
> The steak dinner is mine :-) Donating to the PostgreSQL community :-)

Based on the fact that the inactive memory increased? Your understanding
of inactive memory in FreeBSD is incorrect. Those pages are probably DB
pages that the OS is keeping in inactive memory because Postgres requests
them over and over, which is what the OS is supposed to do to ensure the
best performance. Are you seeing any out of swap space errors? Even if
you are, you still haven't determined if the problem is the result of
Postgres or the node.js stuff you have running. I don't know what node.js
might be caching on the client side ... do you?

No. Until you can actually report back something other than wild
speculation, I'll keep that steak dinner for myself. Besides, that bet
was based on you putting the PG server on seperate hardware from the
web server, which you didn't do.

-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-21 Thread Pavel Stehule
2015-09-21 4:31 GMT+02:00 John Leiseboer :

> I have written a number of functions in C that return BYTEA type. I have
> compiled and run on both Windows and Linux, 32-bit and 64-bit, PostgreSQL
> versions 9.3 and 9.4.
>
> My functions return BYTEA data to the caller. The problem is that memory
> usage grows until there is no memory left on the host, at which point an
> error is returned. If I drop the connection (e.g. by quitting from pqsql),
> the memory is returned.
>
> I wrote the following minimal function to test palloc() and BYTEA return
> behaviour, and found that this minimal program also exhibits the unbounded
> memory growth problem.
>
>
> C source code:
>
> PG_FUNCTION_INFO_V1(test_palloc);
> Datum test_palloc(PG_FUNCTION_ARGS)
> {
> bytea *test_ret;
> int test_len = 1024;
>
> test_ret = (bytea *)palloc(test_len + VARHDRSZ);
> SET_VARSIZE(test_ret, test_len + VARHDRSZ);
> PG_RETURN_BYTEA_P(test_ret);
> }
>
> Function definition:
>
> CREATE OR REPLACE FUNCTION test_palloc() RETURNS BYTEA
> AS E'<path to library>', 'test_palloc' LANGUAGE C IMMUTABLE STRICT;
>
> psql commands to reproduce the problem:
>
> \o out.txt
> SELECT ids.*, test_palloc() FROM GENERATE_SERIES(1, 1000000) ids;
>
> At the completion of the above command, host memory will have been
> consumed but not released back to the system. After quitting psql (\q),
> memory is released.
>
> Is this expected behaviour or a bug? Am I doing something wrong? How can I
> return a BYTEA type from a C library function without having to drop the
> connection in order to recover the allocated memory that is returned to the
> caller?
>
>
This memory is release, when memory context is dropped.

http://www.neilconway.org/talks/hacking/hack_slides.pdf look on slide 15

Regards

Pavel


> Regards,
> John
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-20 Thread John Leiseboer
Tom Lane [mailto:t...@sss.pgh.pa.us] writes:
> But at any rate, bottom line is that your problem is client-side not 
> server-side, and no amount of fooling with the function innards will change 
> it.

I wish it were. While monitoring memory on Linux and Windows machines I see 
that psql memory usage hardly changes, but PostgreSQL server memory usage 
increases steadily until the query stops. PostgreSQL server memory usage stays 
high until after the client drops the connection. This is definitely a case of 
the server holding onto memory until the client drops the connection.

In the other case, when I let the query continue until memory is exhausted, the 
PostgreSQL server crashes with "out of memory" error, not the client.

When does the PostgreSQL server call pfree() after a C function has returned to 
the caller? All I've found in books and Google searches is:

"What makes palloc() special is that it allocates the memory in the current 
context and the whole memory is freed in one go when the context is destroyed."

What "context"? The connection? The transaction? A SQL statement? The function 
call?

John


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-20 Thread Tom Lane
John Leiseboer  writes:
> I have written a number of functions in C that return BYTEA type. I have 
> compiled and run on both Windows and Linux, 32-bit and 64-bit, PostgreSQL 
> versions 9.3 and 9.4.
> My functions return BYTEA data to the caller. The problem is that memory 
> usage grows until there is no memory left on the host, at which point an 
> error is returned. If I drop the connection (e.g. by quitting from pqsql), 
> the memory is returned.

> I wrote the following minimal function to test palloc() and BYTEA return 
> behaviour, and found that this minimal program also exhibits the unbounded 
> memory growth problem.

> C source code:

> PG_FUNCTION_INFO_V1(test_palloc);
> Datum test_palloc(PG_FUNCTION_ARGS)
> {
>   bytea *test_ret;
>   int test_len = 1024;

>   test_ret = (bytea *)palloc(test_len + VARHDRSZ);
>   SET_VARSIZE(test_ret, test_len + VARHDRSZ);
>   PG_RETURN_BYTEA_P(test_ret);
> }

> Function definition:

> CREATE OR REPLACE FUNCTION test_palloc() RETURNS BYTEA AS E'<path to library>', 'test_palloc' LANGUAGE C IMMUTABLE STRICT;

> psql commands to reproduce the problem:

> \o out.txt
> SELECT ids.*, test_palloc() FROM GENERATE_SERIES(1, 1000000) ids;

> At the completion of the above command, host memory will have been consumed 
> but not released back to the system. After quitting psql (\q), memory is 
> released.

Well, first off, it's not that function that is eating memory: if you've
got it marked as IMMUTABLE then it will in fact only be executed *once*
per query.  I don't see any evidence of memory leakage on the server
at all when trying a comparable query.  (Disclaimer: I just used
"repeat('x', 1024)" rather than bothering to compile up a .so.  But I
do not think it behaves differently.)

What I do see bloating is psql, which is absorbing a query result of
about 1GB (1000000 1kB-sized rows) and then spending a lot of cycles
to pretty-print that while writing it to out.txt.

On my Linux box, psql does release the memory back to the kernel when
that's over.  But that would depend on the behavior of libc, so it's
quite plausible that some other platforms would not.

The way to avoid this is to not ask the client program to absorb the whole
1GB-sized query result at once.  You could use a cursor and FETCH a few
thousand rows at a time.  (In reasonably recent versions of psql, "\set
FETCH_COUNT" can do that for you automatically, at the cost of possibly
less pretty output formatting.)  Or use COPY, which is implemented in more
of a streaming style to begin with.

But at any rate, bottom line is that your problem is client-side not
server-side, and no amount of fooling with the function innards will
change it.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Mathew Moon
What is the output of 'free -m'? Look at the third column, second row (+/- 
cache); that is the most relevant number. Even if you are swapping, that can 
happen with plenty of RAM available if 'swappiness' is set too high. 

Sent from my iPhone

 On May 20, 2015, at 2:25 AM, Sachin Srivastava ssr.teleat...@gmail.com 
 wrote:
 
 Dear Concern,
  
 Always my server memory utilization remains at 99%. I have 4 DB servers and 
 the RAM of the servers is (32 GB, 64 GB, 64 GB and 132 GB). On every server 
 we always see memory utilization > 99%. Kindly suggest why this happens 
 and which parameter will resolve this problem.
 
 I am using PostgreSQL 9.1.2 and PostGIS 1.5 on every Red Hat Linux 
 server. Kindly find the top result, "ulimit -a" result, (kernel.shmmax, 
 kernel.shmall, kernel.sem values) and pg_settings result of one server 
 below. If you require any other information then inform me.
  
  
 [root@CPPMOMA_DB01 ~]# top
 top - 12:08:08 up 15 days, 15:45,  5 users,  load average: 1.05, 1.19, 1.35
 Tasks: 605 total,   1 running, 604 sleeping,   0 stopped,   0 zombie
 Cpu(s):  6.9%us,  2.1%sy,  0.0%ni, 78.3%id, 12.7%wa,  0.0%hi,  0.1%si,  0.0%st
 Mem:  32832364k total, 32621168k used,   211196k free,77572k buffers
 
 
 [root@CPPMOMA_DB01 ~]# ulimit -a
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 256323
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 256323
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 [root@CPPMOMA_DB01 ~]#
 
 
 kernel.shmmax = 32212254720
 
 kernel.shmall = 1073741824
 
 kernel.sem = 250 32000 100 384
 
  
 postgres=# SELECT name, source, setting FROM pg_settings WHERE source != 
 'default' AND source != 'override' ORDER by 2, 1;
 
 name|source| setting
 +--+--
  application_name   | client   | psql.bin
  client_encoding| client   | UTF8
  archive_command| configuration file   | cp %p 
 /dbkup/momacpp_213_live/%f
  archive_mode   | configuration file   | on
  autovacuum | configuration file   | on
  autovacuum_max_workers | configuration file   | 3
  checkpoint_segments| configuration file   | 200
  checkpoint_timeout | configuration file   | 300
  checkpoint_warning | configuration file   | 30
  DateStyle  | configuration file   | ISO, MDY
  default_text_search_config | configuration file   | pg_catalog.english
  effective_cache_size   | configuration file   | 524288
  lc_messages| configuration file   | en_US.UTF-8
  lc_monetary| configuration file   | en_US.UTF-8
  lc_numeric | configuration file   | en_US.UTF-8
  lc_time| configuration file   | en_US.UTF-8
  listen_addresses   | configuration file   | *
  log_destination| configuration file   | stderr
  log_directory  | configuration file   | pg_log
  logging_collector  | configuration file   | on
  log_line_prefix| configuration file   | %t
  log_rotation_age   | configuration file   | 1440
  maintenance_work_mem   | configuration file   | 1638400
  max_connections| configuration file   | 2000
  max_files_per_process  | configuration file   | 2000
  max_wal_senders| configuration file   | 5
  port   | configuration file   | 5432
  shared_buffers | configuration file   | 1572864
  temp_buffers   | configuration file   | 4096
  wal_level  | configuration file   | archive
  work_mem   | configuration file   | 32768
  log_timezone   | environment variable | Asia/Kolkata
  max_stack_depth| environment variable | 2048
  TimeZone   | environment variable | Asia/Kolkata
 (34 rows)
  
 postgres=#
  
 
  
  
 Regards,
 Sachin Srivastava
 Assistant Technical Lead(Oracle/PostgreSQL)| TSG
 Cyient | www.cyient.com


Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Naveed Shaikh
Could you also please check whether Transparent Huge Pages (THP) are enabled
on the server; they can also result in intermittent poor performance
along with high system CPU time counted against the database processes.

This can be confirmed by below command:

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

If "always" is selected in the output, THP are enabled; disable them with the
following command:

echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

Thanks & Regards,
Naveed Shaikh





On Wed, May 20, 2015 at 12:55 PM, Sachin Srivastava ssr.teleat...@gmail.com
 wrote:

 Dear Concern,


 Always my server memory utilization remains at 99%. I have 4 DB servers and
 the RAM of the servers is (32 GB, 64 GB, 64 GB and 132 GB). On every server
 we always see memory utilization > 99%. Kindly suggest why this happens
 and which parameter will resolve this problem.

 I am using PostgreSQL 9.1.2 and PostGIS 1.5 on every Red Hat Linux
 server. Kindly find the top result, "ulimit -a" result,
 (kernel.shmmax, kernel.shmall, kernel.sem values) and pg_settings result
 of one server below. If you require any other information then inform
 me.





 [root@CPPMOMA_DB01 ~]# top

 top - 12:08:08 up 15 days, 15:45,  5 users,  load average: 1.05, 1.19,
 1.35

 Tasks: 605 total,   1 running, 604 sleeping,   0 stopped,   0 zombie

 Cpu(s):  6.9%us,  2.1%sy,  0.0%ni, 78.3%id, 12.7%wa,  0.0%hi,  0.1%si,
 0.0%st
 Mem:  32832364k total, 32621168k used,   211196k free,77572k buffers


 [root@CPPMOMA_DB01 ~]# ulimit -a
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 256323
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 256323
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 [root@CPPMOMA_DB01 ~]#


 kernel.shmmax = 32212254720

 kernel.shmall = 1073741824

 kernel.sem = 250 32000 100 384


 postgres=# SELECT name, source, setting FROM pg_settings WHERE source !=
 'default' AND source != 'override' ORDER by 2, 1;

 name|source| setting


 +--+--

  application_name   | client   | psql.bin

  client_encoding| client   | UTF8

  archive_command| configuration file   | cp %p
 /dbkup/momacpp_213_live/%f

  archive_mode   | configuration file   | on

  autovacuum | configuration file   | on

  autovacuum_max_workers | configuration file   | 3

  checkpoint_segments| configuration file   | 200

  checkpoint_timeout | configuration file   | 300

  checkpoint_warning | configuration file   | 30

  DateStyle  | configuration file   | ISO, MDY

  default_text_search_config | configuration file   | pg_catalog.english

  effective_cache_size   | configuration file   | 524288

  lc_messages| configuration file   | en_US.UTF-8

  lc_monetary| configuration file   | en_US.UTF-8

  lc_numeric | configuration file   | en_US.UTF-8

  lc_time| configuration file   | en_US.UTF-8

  listen_addresses   | configuration file   | *

  log_destination| configuration file   | stderr

  log_directory  | configuration file   | pg_log

  logging_collector  | configuration file   | on

  log_line_prefix| configuration file   | %t

  log_rotation_age   | configuration file   | 1440

  maintenance_work_mem   | configuration file   | 1638400

  max_connections| configuration file   | 2000

  max_files_per_process  | configuration file   | 2000

  max_wal_senders| configuration file   | 5

  port   | configuration file   | 5432

  shared_buffers | configuration file   | 1572864

  temp_buffers   | configuration file   | 4096

  wal_level  | configuration file   | archive

  work_mem   | configuration file   | 32768

  log_timezone   | environment variable | Asia/Kolkata

  max_stack_depth| environment variable | 2048

  TimeZone   | environment variable | Asia/Kolkata

 (34 rows)



 postgres=#






 Regards,

 *Sachin Srivastava*
 Assistant Technical Lead(Oracle/PostgreSQL)| TSG
 *Cyient* | www.cyient.com



Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Albe Laurenz
Sachin Srivastava wrote:
 Always my server memory utilization remains at 99%. I have 4 DB servers and 
 the RAM of the servers is (32
 GB, 64 GB, 64 GB and 132 GB). On every server we always see the 
 memory utilization > 99%.
 Kindly suggest why this happens and which parameter will resolve this 
 problem.
 
 I am using PostgreSQL 9.1.2 and PostGIS 1.5 on every Red Hat Linux 
 server. Kindly find the
 top result, "ulimit -a" result, (kernel.shmmax, kernel.shmall, 
 kernel.sem values) and
 pg_settings result of one server below. If you require any other 
 information then inform me.

That sounds just fine.
Linux uses memory for the file system cache.  That memory is shown as in use, 
but
it is available for processes if they need it.

It doesn't look like your machine is swapping.

Do you experience problems?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory leak while trying to update/alter column in postgresql

2014-07-05 Thread Michael Paquier
On Sat, Jul 5, 2014 at 11:06 AM, Madhurima Das madhurima@gmail.com wrote:
 int main()
 {
 PGconn *conn;
 PGresult *res;
 int i=0,nFields=0,row=0,col=0;

 conn = PQconnectdb("dbname=test1 host=localhost user=postgres
 password=yyy");
 if(PQstatus(conn) == CONNECTION_BAD)
 {
 fprintf(stderr, "Connection to database \"%s\" failed.\n",
 PQerrorMessage(conn));
 fprintf(stderr, "%s", PQerrorMessage(conn));
 exit_nicely(conn);
 }

 res = PQexec(conn, "IF COL_LENGTH('xxx','comment') IS NULL");
Call to PQclear is missing here or before the next call to PQexec.

 if(res)
 res = PQexec(conn, "ALTER TABLE xxx ADD comment VARCHAR(500)");
 else
 res = PQexec(conn, "UPDATE TABLE xxx ADD comment VARCHAR(500)");
Same here.

 res = PQexec(conn, "IF COL_LENGTH('xxx','id') IS NULL");
Same here.

 if(res)
 res = PQexec(conn, "ALTER TABLE xxx ADD id VARCHAR(50)");
 else
 res = PQexec(conn, "UPDATE TABLE xxx ADD id VARCHAR(50)");
Same here.

 res = PQexec(conn, "SELECT * FROM xxx");
 if((!res) || (PQresultStatus(res) != PGRES_TUPLES_OK))
 {
 fprintf(stderr, "SELECT command did not return tuples properly\n");
 PQclear(res);
 }

 PQclear(res);
 PQfinish(conn);

 return 0;
 }
Btw, there is a lot of code duplication...
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory leak with CREATE TEMP TABLE ON COMMIT DROP?

2014-06-12 Thread Tom Lane
Eric Ridge e_ri...@tcdi.com writes:
 As best I can guess, Postgres has some kind of memory leak around (at least) 
 temporary tables flagged to drop on commit.  It's fairly easy to reproduce:

I don't see any memory leak with this example.

What I do see is the process's use of shared memory grows slowly until it
reaches 128MB (or whatever you've set shared_buffers to), and then stops.
This is normal behavior and has nothing to do with any memory leak: what
causes that is that top and related tools typically don't count shared
memory pages in a process's memory footprint until the process has
actually touched those pages.  So as the process gradually touches more
and more of the shared buffers, its memory usage appears to climb, but
that's completely an illusion of the viewing tool.  The important point
is the process's private memory usage, and that's not growing at all
AFAICS --- it's steady at a small number of megabytes.

Since you say you had an out-of-memory failure, you may well have had
some kind of leak in your original situation, but this test case isn't
reproducing it.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory leaking ?

2014-05-28 Thread Tom Lane
Zhemin Zhou zhemin.z...@warwick.ac.uk writes:
 We met a problem after running the website for one week. We used a 
 function to convert and save binary files into the database (as bytea). 
 This function worked well in the old version but sometimes makes the new 
 version of postgres crash. This random crash is not file specific. 

AFAICT, it's pure luck that it didn't crash the older system too.  You're
allocating the output buffer too small, at least for cases where size
isn't a multiple of 3:

  bytea *result = (bytea *) 
 palloc(VARHDRSZ+sizeof(char)*(4*(size)/3+15));

For example, if size = 2, 4*2/3 is only 2, but the loop will write 4 bytes
of data.  So the function sometimes clobbers bytes beyond what it
allocated, which unsurprisingly corrupts malloc's data structures.
You need to round up not truncate in this division.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage per postmaster process

2013-11-03 Thread Grzegorz Tańczyk

On 11/02/2013 08:47 PM, Tom Lane wrote:

Author: Tom Lanet...@sss.pgh.pa.us
Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400

 Reduce the memory requirement for large ispell dictionaries.

I checked this patch:
http://www.postgresql.org/message-id/AANLkTi=4fUi1zoFMpZ==yf14rjdv_g1xgakvqmdye...@mail.gmail.com

I can't find it here:
http://doxygen.postgresql.org/spell_8c_source.html

I also don't see those changes in the 9.3.1 source. The status in the commitfest 
list is "Committed". I can't see hold_memory anywhere.



If you're not using ispell, it's not relevant, and I'm not sure whether
the savings were significant for anything but Czech.

I am using ispell.

Thanks

--
Regards,
  Grzegorz


Re: [GENERAL] Memory usage per postmaster process

2013-11-03 Thread Tom Lane
Grzegorz Tańczyk golia...@polzone.pl writes:
 On 11/02/2013 08:47 PM, Tom Lane wrote:
 Author: Tom Lanet...@sss.pgh.pa.us
 Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400
 
 Reduce the memory requirement for large ispell dictionaries.
 I checked this patch:
 http://www.postgresql.org/message-id/AANLkTi=4fUi1zoFMpZ==yf14rjdv_g1xgakvqmdye...@mail.gmail.com

 I can't find it here:
 http://doxygen.postgresql.org/spell_8c_source.html

 I also don't see those changes in the 9.3.1 source. The status in the commitfest 
 list is "Committed". I can't see hold_memory anywhere.

If you read the rest of the discussion of the patch, you'd find out that
what got committed was not all that much like Pavel's original.  But
it has the same effect.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread John R Pierce

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill





--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Grzegorz Tańczyk

On 11/02/2013 07:47 PM, John R Pierce wrote:

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill
In fact that's what happened when the tsearch2 problem occurred, even though 
only a few queries were running at once. A group of idle connections 
was using resources, and that's the thing I don't understand.
Did the tsearch2 dictionary caching implementation improve after 8.3 in this 
respect?


Making a small connection pool will help; however, how small should it be? 
1 connection maximum, 0 minimum? Connections will get closed 
after they are released by application code, but there will still be 
some group of postmaster processes, and how can I be sure that none of them 
will reach 1 GB of system memory? I can't have any control over this (other 
than grepping ps output and manually running pg_cancel_backend once they 
grow too much).


Thanks!

--
Regards,
  Grzegorz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Tom Lane
Grzegorz Tańczyk golia...@polzone.pl writes:
 Did tsearch2 dictionary caching implementation improve after 8.3 on this 
 matter?

Well, there was this:

Author: Tom Lane t...@sss.pgh.pa.us
Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400

Reduce the memory requirement for large ispell dictionaries.

This patch eliminates per-chunk palloc overhead for most small allocations
needed in the representation of an ispell dictionary.  This saves close to
a factor of 2 on the current Czech ispell data.  While it doesn't cover
every last small allocation in the ispell code, we are at the point of
diminishing returns, because about 95% of the allocations are covered
already.

Pavel Stehule, rather heavily revised by Tom

If you're not using ispell, it's not relevant, and I'm not sure whether
the savings were significant for anything but Czech.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory Issue with array_agg?

2013-08-21 Thread Robert Sosinski
Hi Pavel,

Here are the explains you asked for:

explain analyze select string_agg(id::text,',') from things group by guid;
QUERY PLAN

--
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=41434.485..53195.185 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=41434.433..44992.736 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.027..21429.179 rows=2378626 loops=1)
 Total runtime: 56295.362 ms
(6 rows)


explain analyze select array_agg(id::text) from things group by guid;
   QUERY PLAN

-
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=23953.922..38157.059 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=23953.847..27527.316 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.007..4941.752 rows=2378626 loops=1)
 Total runtime: 41280.897 ms
(6 rows)



Those queries seem to run fine on the machine now, and the memory is not
inflating. But I just ran this one, and it blew up:

explain with t as (select id, guid, md5(concat_ws(':', fields -> 'a',
fields -> 'b', fields -> 'c', fields -> 'd', fields -> 'e', foo_id::text))
from things) select md5, count(id), array_agg(id) from t group by 1 having
count(id) > 1;

-Robert


On Tue, Aug 20, 2013 at 1:53 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Can you send a EXPLAIN result in both use cases?

 Pavel


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct, however before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

  What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 How many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
  | plain|  |
  created_at | timestamp without time zone | not null
  | plain|  |
  updated_at | timestamp without time zone | not null
  | plain|  |
  foo_id | integer | not null
  | plain|  |
  bar_id | integer | not null
  | plain|  |
  baz_id | integer | not null
  | plain|  |
  guid   | character varying   | not null
  | extended |  |
  name   | character varying   | not null
  | extended |  |
  price  | numeric(12,2)   | not null
  | main |  |
  currency   | character varying   | not null
  | extended |  |
  amount | integer | not null
  | plain|  |
  the_date   | date| not null
  | plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 "things_pkey" PRIMARY KEY, btree (id)
 "things_foo_id_idx" btree (foo_id)
 "things_bar_id_idx" btree 

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Hello

It is strange. I am trying to simulate it without success, on 1M rows
where every id is duplicated twice:

processing string_agg .. approx. 30MB

processing array_agg .. approx. 32MB

postgres=# create table foo(a int, b varchar);
CREATE TABLE
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# CREATE INDEX on foo(b);
CREATE INDEX
postgres=# ANALYZE foo;
ANALYZE
postgres=# explain analyze select string_agg(a::text,',') from foo group by b;
                                                         QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual time=10195.972..14993.493 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual time=10195.944..13659.985 rows=2000000 loops=1)
         Sort Key: b
         Sort Method: external merge  Disk: 97768kB
         ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37) (actual time=0.018..321.197 rows=2000000 loops=1)
 Total runtime: 15066.397 ms
(6 rows)

postgres=# explain analyze select array_agg(a::text) from foo group by b;
                                                         QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual time=10062.095..15697.755 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual time=10062.059..13613.300 rows=2000000 loops=1)
         Sort Key: b
         Sort Method: external merge  Disk: 97768kB
         ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37) (actual time=0.029..311.423 rows=2000000 loops=1)
 Total runtime: 15799.226 ms
(6 rows)

Regards

Pavel


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct; however, before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
   

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Can you send a EXPLAIN result in both use cases?

Pavel


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct; however, before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
 EXECUTE PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
 commanded this server process to roll back the current transaction and
 exit, because another server process exited abnormally and possibly
 corrupted shared memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
 able to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and
 when I ran the query while viewing htop, the virtual size of the
 Postgres process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
Hello

please, can you send some example or test?

Regards

Pavel Stehule


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed the
 query to use string_agg, it worked fine.  I also tried using array_agg on a
 few different queries, all yielding the same issue.  Swapping in string_agg
 fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when I
 ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert



Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
Hi Pavel,

What kind of example do you need?  I can't give you the actual data I have
in the table, but I can give you an example query and the schema attached
below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
data.  Average size of the extended columns (using the pg_column_size
function) in bytes are:

guid: 33
name: 2.41
currency: 4
fields: 120.32
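
For reference, per-column averages like these can be computed directly with pg_column_size. This is a sketch against the example schema below, not a query from the thread; the column names are the ones listed above:

```sql
-- Average stored size, in bytes, of each extended-storage column
SELECT avg(pg_column_size(guid))     AS guid_bytes,
       avg(pg_column_size(name))     AS name_bytes,
       avg(pg_column_size(currency)) AS currency_bytes,
       avg(pg_column_size(fields))   AS fields_bytes
FROM things;
```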

example query:

-- find duplicate records using a guid
select guid, array_agg(id) from orders group by guid;
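
As an aside (a sketch, not from the thread): when only the duplicates are of interest, a HAVING clause shrinks the result set to just the offending groups. The server still has to aggregate every group, so peak server-side memory is similar, but far fewer rows are returned to the client:

```sql
-- Return only guids that actually occur more than once
SELECT guid, array_agg(id) AS ids
FROM orders
GROUP BY guid
HAVING count(*) > 1;
```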

example schema:
 Table public.things

   Column   |Type |  Modifiers
 | Storage  | Stats target | Description
+-+-+--+--+-
 id | integer | not null default
nextval('things_id_seq'::regclass) | plain|  |
 version| integer | not null
 | plain|  |
 created_at | timestamp without time zone | not null
 | plain|  |
 updated_at | timestamp without time zone | not null
 | plain|  |
 foo_id | integer | not null
 | plain|  |
 bar_id | integer | not null
 | plain|  |
 baz_id | integer | not null
 | plain|  |
 guid   | character varying   | not null
 | extended |  |
 name   | character varying   | not null
 | extended |  |
 price  | numeric(12,2)   | not null
 | main |  |
 currency   | character varying   | not null
 | extended |  |
 amount | integer | not null
 | plain|  |
 the_date   | date| not null
 | plain|  |
 fields | hstore  |
| extended |  |
Indexes:
things_pkey PRIMARY KEY, btree (id)
things_foo_id_idx btree (foo_id)
things_bar_id_idx btree (bar_id)
things_baz_id_idx btree (baz_id)
things_guid_uidx UNIQUE, btree (guid)
things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
'0'::text))
things_price_idx btree (price)

Foreign-key constraints:
things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
Triggers:
timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
PROCEDURE timestamps_tfun()

Let me know if you need anything else.

Thanks,


On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when I
 ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert





Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I have
 in the table, but I can give you an example query and the schema attached
 below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
 data.  Average size of the extended columns (using the pg_column_size
 function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


how many distinct guids are there, and how many duplicates?

regards

Pavel




 example schema:
  Table public.things

Column   |Type |  Modifiers
  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
  | plain|  |
  created_at | timestamp without time zone | not null
  | plain|  |
  updated_at | timestamp without time zone | not null
  | plain|  |
  foo_id | integer | not null
  | plain|  |
  bar_id | integer | not null
  | plain|  |
  baz_id | integer | not null
  | plain|  |
  guid   | character varying   | not null
  | extended |  |
  name   | character varying   | not null
  | extended |  |
  price  | numeric(12,2)   | not null
  | main |  |
  currency   | character varying   | not null
  | extended |  |
  amount | integer | not null
  | plain|  |
  the_date   | date| not null
  | plain|  |
  fields | hstore  |
   | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
 PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when
 I ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert






Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
At the moment, all guids are distinct; however, before I zapped the
duplicates, there were 280 duplicates.

Currently, there are over 2 million distinct guids.

-Robert


On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I have
 in the table, but I can give you an example query and the schema attached
 below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
 data.  Average size of the extended columns (using the pg_column_size
 function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
   | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
 EXECUTE PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
 commanded this server process to roll back the current transaction and
 exit, because another server process exited abnormally and possibly
 corrupted shared memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
 able to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when
 I ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert







Re: [GENERAL] Memory Issue with array_agg?

2013-08-18 Thread Christoph Moench-Tegeder
## Robert Sosinski (rsosin...@ticketevolution.com):

 When using array_agg on a large table, memory usage seems to spike up until
 Postgres crashes with the following error:

This sounds like bug #7916.
http://www.postgresql.org/message-id/e1uceeu-0004hy...@wrigleys.postgresql.org
As noted in that thread, changing the AllocSet parameters for
accumArrayResult may help (call to AllocSetContextCreate() in
ArrayBuildState *accumArrayResult(), src/backend/utils/adt/arrayfuncs.c,
change ALLOCSET_DEFAULT_*SIZE to ALLOCSET_SMALL_*SIZE).
Also, lowering work_mem may help, depending on your SQL.
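
The work_mem experiment can be done per session, without touching the server configuration. This is a sketch, not from the thread, reusing the reporter's example query; a lower work_mem also makes the planner less likely to choose a memory-hungry hash-based plan:

```sql
-- Session-local experiment: shrink the per-operation memory budget,
-- re-run the failing array_agg query, then restore the default.
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT guid, array_agg(id) FROM orders GROUP BY guid;
RESET work_mem;
```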

Regards,
Christoph

-- 
Spare Space


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-07-03 Thread Daniel Cristian Cruz
I found where the problem is:

http://www.postgresql.org/docs/9.1/static/release-9-1-6.html

It would be nice if something were added to the 9.2 release notes to warn
admins.


2013/4/24 Adrian Klaver adrian.kla...@gmail.com

 On 04/24/2013 11:20 AM, Daniel Cristian Cruz wrote:

 I've done an explain analyze in the test environment, and there is no
 aggressive memory usage.

 So I dropped the database in the new cluster and restored a fresh dump
 from production (in theory, that's the difference between the two
 environments).

 A few minutes later I had an answer: after a dump/restore, there is no
 problem. The same cluster, just after a dump/restore.

 Since I had no idea what the problem was, and the structure from the
 dump of the bad database is equal to the dump from the production (it
 differs only in check constraints where ((turma.situacao)::text = ANY
 ((ARRAY['Aguardando Atualização'::character varying, 'Em
 Andamento'::character varying])::text[]))) became
 ((turma.situacao)::text = ANY (ARRAY[('Aguardando
 Atualização'::character varying)::text, ('Em Andamento'::character
 varying)::text], how can I report an issue in pg_upgrade?


 The bug reporting page is here:

 http://www.postgresql.org/support/submitbug/



 Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup
 until May 1st. Until then, if someone would like to know something
 about it, just ask me; I would like to help resolve the issue.




 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-24 Thread Daniel Cristian Cruz
I've done an explain analyze in the test environment, and there is no
aggressive memory usage.

So I dropped the database in the new cluster and restored a fresh dump from
production (in theory, that's the difference between the two environments).

A few minutes later I had an answer: after a dump/restore, there is no
problem. The same cluster, just after a dump/restore.

Since I had no idea what the problem was, and the structure from the dump
of the bad database is equal to the dump from the production (it differs
only in check constraints where ((turma.situacao)::text = ANY
((ARRAY['Aguardando Atualização'::character varying, 'Em
Andamento'::character varying])::text[]))) became ((turma.situacao)::text
= ANY (ARRAY[('Aguardando Atualização'::character varying)::text, ('Em
Andamento'::character varying)::text], how can I report an issue in
pg_upgrade?

Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup until
May 1st. Until then, if someone would like to know something about it,
just ask me; I would like to help resolve the issue.


2013/4/23 Daniel Cristian Cruz danielcrist...@gmail.com

 2013/4/23 Adrian Klaver adrian.kla...@gmail.com

 On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

 2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com


 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
 ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
 ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
 ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente ) > 0
 FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
 pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
 presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
 JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
 recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
 turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
 senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
 ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
 recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
 ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
 '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY
 inicio;

 server 9.1:
 http://explain.depesz.com/s/fmM

 server 9.2:
 http://explain.depesz.com/s/wXm

 After run this one on server 9.2, RES memory reached 6.5GB, VIRT
 15GB.


 Since there is no response, is this memory usage normal? The same query
 on version 9.1 doesn't use that much memory.


 Not sure how it applies, but I noticed that a GroupAggregate in 9.1 that
 took 1.22 secs became a HashAggregate in the 9.2 query and took 12.54
 secs.


 I used to read a explain and find something, but this one is huge.
 Unfortunately I'm still working on data migration from the 9.2 to 9.1 and
 didn't get time to read it in detail...

 I'm concerned about this because there is only one report like
 that. Does anyone else see the same pattern when using inherited tables?


 Also noticed that in your 9.2 production conf:

 (no constraint_exclusion set)

 Does this mean the default of 'partition' was left as is or that the
 setting was set to 'off'?


 No, default:

 senai=# show constraint_exclusion ;
  constraint_exclusion
 --
  partition
 (1 row)

 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-24 Thread Adrian Klaver

On 04/24/2013 11:20 AM, Daniel Cristian Cruz wrote:

I've done an explain analyze in the test environment, and there is no
aggressive memory usage.

So I dropped the database in the new cluster and restored a fresh dump
from production (in theory, that's the difference between the two
environments).

A few minutes later I had an answer: after a dump/restore, there is no
problem. The same cluster, just after a dump/restore.

Since I had no idea what the problem was, and the structure from the
dump of the bad database is equal to the dump from the production (it
differs only in check constraints where ((turma.situacao)::text = ANY
((ARRAY['Aguardando Atualização'::character varying, 'Em
Andamento'::character varying])::text[]))) became
((turma.situacao)::text = ANY (ARRAY[('Aguardando
Atualização'::character varying)::text, ('Em Andamento'::character
varying)::text], how can I report an issue in pg_upgrade?


The bug reporting page is here:

http://www.postgresql.org/support/submitbug/



Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup
until May 1st. Until then, if someone would like to know something
about it, just ask me; I would like to help resolve the issue.






--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Daniel Cristian Cruz
2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com

 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino, ced.evento,
 ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo, ced.id_tipo_evento,
 ced.tipo_evento, ac.media_referencia, p.nome, ef.nome AS nomeEspacoFisico,
 ( SELECT count ( pre2.presente ) > 0 FROM turma.presenca pre2 WHERE
 pre2.id_aula = ac.id_aula AND pre2.id_evento = ac.id_evento AND
 pre2.id_diario = '64469' ) AS presenca, ced.id_aula FROM
 recurso.consulta_evento_diario ced LEFT JOIN recurso.evento e USING (
 id_evento ) LEFT JOIN recurso.espaco_fisico ef USING ( id_espaco_fisico )
 LEFT JOIN turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN senai.pessoa p
 USING ( id_pessoa ) WHERE id_diario = '64469' AND ced.id_evento NOT IN (
 SELECT ec.id_evento_sobreposto FROM recurso.evento_conflito ec WHERE
 ec.id_evento_sobreposto = ced.id_evento AND ec.ignorado IS NULL ) AND
 ced.inicio BETWEEN '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99'
 ORDER BY inicio;

 server 9.1:
 http://explain.depesz.com/s/fmM

 server 9.2:
 http://explain.depesz.com/s/wXm

 After run this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


Since there is no response, is this memory usage normal? The same query on
version 9.1 doesn't use that much memory.

I'm concerned about this because there is only one report like that.
Does anyone else see the same pattern when using inherited tables?

Just for information, my schema uses one table that is inherited by all
others tables; it is an audit record: creator, creation time, creator
application, updater, update time, updater application, table name and
record id.

Thanks,
-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Adrian Klaver

On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com

query1:
EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente ) > 0
FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
'2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY inicio;

server 9.1:
http://explain.depesz.com/s/fmM

server 9.2:
http://explain.depesz.com/s/wXm

After run this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


Since there is no response, is this memory usage normal? The same query
on version 9.1 doesn't use that much memory.


Not sure how it applies, but I noticed that a GroupAggregate in 9.1 that
took 1.22 secs became a HashAggregate in the 9.2 query and took 12.54
secs.
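
One way to check whether the planner's switch to HashAggregate is responsible would be to disable it for the session and re-run the query. This is a diagnostic sketch, not something suggested in the thread; enable_hashagg is a standard planner setting:

```sql
-- Force the 9.2 planner back to a sort-based GroupAggregate
-- for this session only, then compare memory use and timing.
SET enable_hashagg = off;
-- re-run the problem query here under EXPLAIN ANALYZE
RESET enable_hashagg;
```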




I'm concerned about this because there is only one report like
that. Does anyone else see the same pattern when using inherited tables?


Also noticed that in your 9.2 production conf:

(no constraint_exclusion set)

Does this mean the default of 'partition' was left as is or that the 
setting was set to 'off'?




Just for information, my schema uses one table that is inherited by all
other tables; it is an audit record: creator, creation time, creator
application, updater, update time, updater application, table name and
record id.

Thanks,
--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Daniel Cristian Cruz
2013/4/23 Adrian Klaver adrian.kla...@gmail.com

 On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

 2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com
 mailto:danielcrist...@gmail.com


 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
 ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
 ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
 ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente ) > 0
 FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
 pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
 presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
 JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
 recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
 turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
 senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
 ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
 recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
 ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
 '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY
 inicio;

 server 9.1:
 http://explain.depesz.com/s/fmM

 server 9.2:
 http://explain.depesz.com/s/wXm

 After running this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


 Since there is no response, is this memory usage normal? The same query
 on version 9.1 doesn't use that much memory.


 Not sure how it applies, but I noticed that a GroupAggregate that took 1.22
 secs in 9.1 became a HashAggregate in the 9.2 query and took 12.54 secs.


I can usually read an explain and find something, but this one is huge.
Unfortunately I'm still working on the data migration from 9.2 back to 9.1
and didn't get time to read it in detail...

I'm concerned about this because there is only one report like this.
 Does anyone else see the same pattern when using inherited tables?


 Also noticed that in your 9.2 production conf:

 (no constraint_exclusion set)

 Does this mean the default of 'partition' was left as is or that the
 setting was set to 'off'?


No, default:

senai=# show constraint_exclusion ;
 constraint_exclusion
--
 partition
(1 row)

-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-22 Thread Daniel Cristian Cruz
I'm running pgBadger over the log, and will pick some queries to EXPLAIN
ANALYZE.

The 9.2 cluster is running on the same server as production, so I will try
to compare some critical explains and publish them on explain.depesz.com.

2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 09:29 AM, Daniel Cristian Cruz wrote:




 2013/4/21 Adrian Klaver adrian.kla...@gmail.com
 mailto:adrian.klaver@gmail.**com adrian.kla...@gmail.com




 1)
 Major upgrade from 9.1.4 to 9.2.4.
 Used pg_upgrade
 Tested on VM with 9.2.4 and no problems.
 Same machine used for production server 9.1.4 and 9.2.4
 When complex queries were run on production server under 9.2.4
 memory usage climbed out of control.

 Unanswered questions:

 a) Data set sizes between test and production machines, how do they
 differ?


 It's the same on both; we do a dump/restore every day for the development
 / issue team's work.


 Which begs the question, what is different about your test setup that
 makes it not act up?


test environment:

virtual server
8 cores
12 GB RAM
4GB SWAP

max_connections = 200
shared_buffers = 800MB
temp_buffers = 32MB
work_mem = 256MB
maintenance_work_mem = 768MB
(no max_stack_depth set)
shared_preload_libraries =
'$libdir/plpgsql,$libdir/plpython2,$libdir/pgxml,$libdir/pg_stat_statements'
wal_level = minimal
checkpoint_segments = 15
archive_mode = off
max_wal_senders = 0
(no effective_cache_size set)
constraint_exclusion = partition
log_min_duration_statement = 1000
(no log_temp_files set)
(no statement_timeout set)
max_locks_per_transaction = 1024

production:

true server
24 cores
96GB RAM
50GB SWAP

max_connections = 1000
shared_buffers = 6GB
temp_buffers = 24MB
work_mem = 24MB
maintenance_work_mem = 128MB
max_stack_depth = 8MB
(no shared_preload_libraries, disabled after problems because
pg_stat_statements was a new module)
wal_level = hot_standby
checkpoint_segments = 20
archive_mode = on
(archive_command set)
max_wal_senders = 1
effective_cache_size = 32GB
(no constraint_exclusion set)
log_min_duration_statement = 5000
log_temp_files = 0
statement_timeout = 30
(no max_locks_per_transaction set)
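One thing worth noting when comparing the two configurations above: work_mem is 256MB on test but only 24MB in production, and it applies per sort/hash node per backend, so with up to 1000 connections raising it globally is risky. A hedged sketch of scoping it more narrowly instead (the role name below is made up):

```sql
-- Per-session override for one heavy report query:
SET work_mem = '256MB';

-- Or persistently for a single role ("reporting" is a hypothetical name):
ALTER ROLE reporting SET work_mem = '128MB';
```

This keeps the larger budget limited to the sessions that actually need it.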


 We know that the test servers are running on VMs with fewer resources than
 the production server.

 So:

 Are the VMs running the same OS and OS version as the production server?


test: Red Hat Enterprise Linux Server release 5.5, Linux
2.6.18-194.26.1.el5 #1 SMP Fri Oct 29 14:21:16 EDT 2010 x86_64 x86_64
x86_64 GNU/Linux
production: CentOS release 5.5, Linux 2.6.18-194.32.1.el5 #1 SMP Wed Jan 5
17:52:25 EST 2011 x86_64 x86_64 x86_64 GNU/Linux



 What are 'hardware differences' between the test VMs and the physical
 server?


above.



 Are the Postgres configurations different for the test vs production
 servers?


Yes, some of them, shown above.


 I would guess the usage pattern is different, but in what way?
Number of connections/sessions?


300 connections in production, 50 in test.


INSERT/UPDATE/DELETE pattern?


test: just test cases, development cases and issue cases
production: for 1 minute (00:01:00.076343), 583 inserts, 306 updates and 13
deletes and 3300 transactions (xacts_commits)



Client software using the database?


Sites using PHP, with and without a connection pool and with and without
Doctrine; and a huge system in Java with Hibernate using the JBoss pooler.
The Java application is the main user.






 b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4
 test and 9.2.4 production?
  Since there is no single query causing the problem, I don't know if it
  could help.


 For lack of anything else, pick one and try it on the various servers to
 see if something stands out.


Yes, here it is:

query1:
EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino, ced.evento,
ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo, ced.id_tipo_evento,
ced.tipo_evento, ac.media_referencia, p.nome, ef.nome AS nomeEspacoFisico,
( SELECT count ( pre2.presente ) > 0 FROM turma.presenca pre2 WHERE
pre2.id_aula = ac.id_aula AND pre2.id_evento = ac.id_evento AND
pre2.id_diario = '64469' ) AS presenca, ced.id_aula FROM
recurso.consulta_evento_diario ced LEFT JOIN recurso.evento e USING (
id_evento ) LEFT JOIN recurso.espaco_fisico ef USING ( id_espaco_fisico )
LEFT JOIN turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
recurso.evento_participante ep USING ( id_evento ) LEFT JOIN senai.pessoa p
USING ( id_pessoa ) WHERE id_diario = '64469' AND ced.id_evento NOT IN (
SELECT ec.id_evento_sobreposto FROM recurso.evento_conflito ec WHERE
ec.id_evento_sobreposto = ced.id_evento AND ec.ignorado IS NULL ) AND
ced.inicio BETWEEN '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99'
ORDER BY inicio;

server 9.1:
http://explain.depesz.com/s/fmM

server 9.2:
http://explain.depesz.com/s/wXm

After running this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.

query 2:
EXPLAIN ANALYZE SELECT count ( 

Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tomas Vondra
Hi,

we got a report of (probably) the same issue on a local mailing list.
Maybe it'll help in finding the root cause, so I'm resending the info
here too.

On 21.4.2013 01:19, Adrian Klaver wrote:
 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:
 I think I didn't make it clear: the session memory usage is growing
 too fast, until all server memory is used and swapping occurs.

 I've never seen anything like that. This version has been in a test
 environment for a long time...

 Thanks if someone could help me.
 
 Before any one can help I would think more information is needed;
 
 1) Is it on the same machine/OS as the old version?

Yes. This was an upgrade from 9.2.3 to 9.2.4, so this was the usual
minor upgrade procedure - stop, install 9.2.4 package, start.

AFAIK there was no other change (e.g. update of kernel).

 2) What are the hardware specs for the machine?

32GB of RAM, 6 cores. I don't know which Linux distribution they run.

The interesting part is that they have a lot of tables due to a partitioned
schema. In total there are ~9500 tables.

 3) Is it still in test mode or in production?

It has been in production for a long time and so far was running fine,
until the upgrade to 9.2.4.

 4) You seem to imply that in test mode everything worked alright, is
 that the case?

It was working fine in production (exactly the same workload) for a
long time (a few months at least).

 5) In either case, test/production, what is being done in the session(s)?

Simple selects, mostly index scans, nothing complex or time consuming.

There's no particular query that crashes; it's rather about a
combination of queries.

 6) Is there anything in the Postgres logs that might shed light?

I do have a log with the memory context info printed after the OOM
killed the session - see it attached.

Tomas


crash.log.gz
Description: application/gzip

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 I do have a log with the memory context info printed after the OOM
 killed the session - see it attached.

The only thing that seems rather bloated is the CacheMemoryContext,
which seems to be because the backend has cached info about several
thousand tables and indexes.  Given that you say there's 9500 relations
in their schema, it's hard to believe that 9.2.4 is suddenly doing that
where 9.2.3 didn't.  I'm wondering if they've done something else that
restricted the amount of memory available to a backend.

regards, tom lane
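To see how much relation metadata each backend may end up caching (a rough check, not from the thread), counting tables and indexes in pg_class gives the figure Tom refers to:

```sql
-- Roughly how many relations the per-backend relcache could grow to hold;
-- this should land near the ~9500 relations mentioned above.
SELECT relkind, count(*)
FROM pg_class
WHERE relkind IN ('r', 'i')
GROUP BY relkind;
```

With thousands of relations, every backend that touches most of them pays the relcache cost independently, which is why CacheMemoryContext dominates the dump.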


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Tom Lane t...@sss.pgh.pa.us

 Tomas Vondra t...@fuzzy.cz writes:
  I do have a log with the memory context info printed after the OOM
  killed the session - see it attached.

 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a somewhat large RES
size for idle sessions, though not as large as under 9.2. Below is the actual
server top.

top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32, 1.28
Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
 0.0%st
Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres: writer process
10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres: integracao senai 10.1.3.200(57290) idle
18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres: integracao senai 10.1.3.200(51766) idle
23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres: sgn2 senai 10.1.3.1(8353) idle
30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres: sgn2 senai 10.1.3.1(20575) idle
 1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres: sgn2 senai 10.1.3.1(52594) idle
 5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres: sgn2 senai 10.1.3.1(57789) idle
 8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres: sgn2 senai 10.1.3.1(39002) idle
 7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres: sgn2 senai 10.1.3.1(58921) SELECT
 7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres: sgn2 senai 10.1.3.1(38869) idle
29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres: sgn2 senai 10.1.3.1(47910) idle
17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres: sgn2 senai 10.1.3.1(10035) idle
 1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres: sgn2 senai 10.1.3.1(26616) idle
30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres: sgn2 senai 10.1.3.1(20472) idle
 8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres: sgn2 senai 10.1.3.1(59397) idle
30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres: sgn2 senai 10.1.3.1(48402) idle
29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres: sgn2 senai 10.1.3.1(47784) idle
 8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres: sgn2 senai 10.1.3.1(59469) idle
13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres: sgn2 senai 10.1.3.1(31021) idle
18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres: sgn2 senai 10.1.3.1(61676) SELECT
15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres: sgn2 senai 10.1.3.1(7043) SELECT
28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres: sgn2 senai 10.1.3.1(14902) idle
 8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres: sgn2 senai 10.1.3.1(59466) idle
30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres: sgn2 senai 10.1.3.1(48499) idle




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/20/2013 05:19 PM, Daniel Cristian Cruz wrote:

Copying to list to fill in blanks.





2013/4/20 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com

On 04/20/2013 04:30 PM, Daniel Cristian Cruz wrote:




2013/4/20 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com
 mailto:adrian.kla...@gmail.com


 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

 I think I didn't make it clear: the session memory usage is growing
 too fast, until all server memory is used and swapping occurs.

 I've never seen anything like that. This version has been in a test
 environment for a long time...

 Thanks if someone could help me.


 Before any one can help I would think more information is
needed;

 1) Is it on the same machine/OS as the old version?


Yes, same machine, CentOS 5.5; only PostgreSQL was upgraded


How did you upgrade?
 pg_upgrade
 dump/restore


pg_upgrade




 4) You seem to imply that in test mode everything worked
alright, is
 that the case?


Yes, got two servers, never got the same issue


So what difference is there between the test and production servers?


A real server vs. two VMs with much less memory and fewer CPUs





 5) In either case, test/production, what is being done in the
 session(s)?


Some complex queries. Some very complex queries...


Any chance to see an EXPLAIN ANALYZE for query on test machine vs
production?


Right now I'm building a tool to import the loose records from the new
version to the old, since I'm rolling back the upgrade. As soon as I
finish it, I can debug what is happening in the cluster.









 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
adrian.kla...@gmail.com mailto:adrian.kla...@gmail.com
 mailto:adrian.kla...@gmail.com





--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com mailto:adrian.kla...@gmail.com




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us

Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
  I do have a log with the memory context info printed after the OOM
  killed the session - see it attached.

The only thing that seems rather bloated is the CacheMemoryContext,
which seems to be because the backend has cached info about several
thousand tables and indexes.  Given that you say there's 9500 relations
in their schema, it's hard to believe that 9.2.4 is suddenly doing that
where 9.2.3 didn't.  I'm wondering if they've done something else that
restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a somewhat large RES
size for idle sessions, though not as large as under 9.2. Below is the actual
server top.


Just to be clear, the below is for the 9.1.4 server you rolled back to?



top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32, 1.28
Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
  0.0%st
Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres:
writer process
10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres:
integracao senai 10.1.3.200(57290) idle
18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres:
integracao senai 10.1.3.200(51766) idle
23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres:
sgn2 senai 10.1.3.1(8353) idle
30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres:
sgn2 senai 10.1.3.1(20575) idle
  1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres:
sgn2 senai 10.1.3.1(52594) idle
  5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres:
sgn2 senai 10.1.3.1(57789) idle
  8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres:
sgn2 senai 10.1.3.1(39002) idle
  7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres:
sgn2 senai 10.1.3.1(58921) SELECT
  7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres:
sgn2 senai 10.1.3.1(38869) idle
29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres:
sgn2 senai 10.1.3.1(47910) idle
17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres:
sgn2 senai 10.1.3.1(10035) idle
  1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres:
sgn2 senai 10.1.3.1(26616) idle
30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres:
sgn2 senai 10.1.3.1(20472) idle
  8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres:
sgn2 senai 10.1.3.1(59397) idle
30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres:
sgn2 senai 10.1.3.1(48402) idle
29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres:
sgn2 senai 10.1.3.1(47784) idle
  8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres:
sgn2 senai 10.1.3.1(59469) idle
13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres:
sgn2 senai 10.1.3.1(31021) idle
18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres:
sgn2 senai 10.1.3.1(61676) SELECT
15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres:
sgn2 senai 10.1.3.1(7043) SELECT
28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres:
sgn2 senai 10.1.3.1(14902) idle
  8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres:
sgn2 senai 10.1.3.1(59466) idle
30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres:
sgn2 senai 10.1.3.1(48499) idle

--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tomas Vondra
On 21.4.2013 15:14, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 I do have a log with the memory context info printed after the OOM
 killed the session - see it attached.
 
 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.

My thoughts exactly. I can't really compare the CacheMemoryContext to
9.2.3, as I have no data from that version. So maybe it really did
not change, but something else obviously did.

I'm not aware of any other changes, but I'll verify that.

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

 2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us


 Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
   I do have a log with the memory context info printed after the OOM
   killed the session - see it attached.

 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500
 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing
 that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.


  Maybe, since I'm running the same server and top shows a somewhat large RES
  size for idle sessions, though not as large as under 9.2. Below is the actual
  server top.


 Just to be clear, the below is for the 9.1.4 server you rolled back to?


Yes.





 top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32,
 1.28
 Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
 Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
   0.0%st
 Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
 Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres:
 writer process
 10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres:
 integracao senai 10.1.3.200(57290) idle
 18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres:
 integracao senai 10.1.3.200(51766) idle
 23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres:
 sgn2 senai 10.1.3.1(8353) idle
 30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres:
 sgn2 senai 10.1.3.1(20575) idle
   1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres:
 sgn2 senai 10.1.3.1(52594) idle
   5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres:
 sgn2 senai 10.1.3.1(57789) idle
   8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres:
 sgn2 senai 10.1.3.1(39002) idle
   7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres:
 sgn2 senai 10.1.3.1(58921) SELECT
   7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres:
 sgn2 senai 10.1.3.1(38869) idle
 29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres:
 sgn2 senai 10.1.3.1(47910) idle
 17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres:
 sgn2 senai 10.1.3.1(10035) idle
   1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres:
 sgn2 senai 10.1.3.1(26616) idle
 30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres:
 sgn2 senai 10.1.3.1(20472) idle
   8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres:
 sgn2 senai 10.1.3.1(59397) idle
 30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres:
 sgn2 senai 10.1.3.1(48402) idle
 29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres:
 sgn2 senai 10.1.3.1(47784) idle
   8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres:
 sgn2 senai 10.1.3.1(59469) idle
 13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres:
 sgn2 senai 10.1.3.1(31021) idle
 18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres:
 sgn2 senai 10.1.3.1(61676) SELECT
 15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres:
 sgn2 senai 10.1.3.1(7043) SELECT
 28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres:
 sgn2 senai 10.1.3.1(14902) idle
   8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres:
 sgn2 senai 10.1.3.1(59466) idle
 30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres:
 sgn2 senai 10.1.3.1(48499) idle

 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 07:50 AM, Daniel Cristian Cruz wrote:




2013/4/21 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com

On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us
mailto:t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us


 Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz
mailto:t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
   I do have a log with the memory context info printed
after the OOM
   killed the session - see it attached.

 The only thing that seems rather bloated is the
CacheMemoryContext,
 which seems to be because the backend has cached info about
several
 thousand tables and indexes.  Given that you say there's
9500 relations
 in their schema, it's hard to believe that 9.2.4 is
suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done
something else that
 restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a somewhat large
RES size for idle sessions, though not as large as under 9.2. Below is
the actual server top.


Just to be clear, the below is for the 9.1.4 server you rolled back to?


Yes.


To recap for those following along, there are two different cases in 
play here.


1)
Major upgrade from 9.1.4 to 9.2.4.
Used pg_upgrade
Tested on VM with 9.2.4 and no problems.
Same machine used for production server 9.1.4 and 9.2.4
When complex queries were run on production server under 9.2.4, memory
usage climbed out of control.


Unanswered questions:

a) Data set sizes between test and production machines, how do they differ?

b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4 test 
and 9.2.4 production?


2)
Minor upgrade 9.2.3 to 9.2.4
No test server, went production to production.
Same machine.
When some combination of queries was run on 9.2.4, memory usage climbed
out of control.







--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 07:50 AM, Daniel Cristian Cruz wrote:




 2013/4/21 Adrian Klaver adrian.kla...@gmail.com
 mailto:adrian.kla...@gmail.com


 On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

 2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us
 mailto:t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us



  Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz
 mailto:t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
I do have a log with the memory context info printed
 after the OOM
killed the session - see it attached.

  The only thing that seems rather bloated is the
 CacheMemoryContext,
  which seems to be because the backend has cached info about
 several
  thousand tables and indexes.  Given that you say there's
 9500 relations
  in their schema, it's hard to believe that 9.2.4 is
 suddenly doing that
  where 9.2.3 didn't.  I'm wondering if they've done
 something else that
  restricted the amount of memory available to a backend.


 Maybe, since I'm running the same server and top shows a somewhat large
 RES size for idle sessions, though not as large as under 9.2. Below is
 the actual server top.


 Just to be clear, the below is for the 9.1.4 server you rolled back to?


 Yes.


 To recap for those following along, there are two different cases in play
 here.

 1)
 Major upgrade from 9.1.4 to 9.2.4.
 Used pg_upgrade
 Tested on VM with 9.2.4 and no problems.
 Same machine used for production server 9.1.4 and 9.2.4
 When complex queries were run on production server under 9.2.4, memory
 usage climbed out of control.

 Unanswered questions:

 a) Data set sizes between test and production machines, how do they differ?


It's the same on both; we do a dump/restore every day for the development /
issue team's work.



 b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4 test
 and 9.2.4 production?


Since there is no single query causing the problem, I don't know if it
could help.

One thing I didn't mention: I built a record table used to track every
transaction in the database, and all tables inherit from it. This way I
easily found the records to transfer from 9.2.4 to 9.1.x.

But it's similar to the other case: they use inheritance for partitioning
purposes, while I use it to simplify the model (the inherited table is
hidden in the model).

Thanks,
-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 09:29 AM, Daniel Cristian Cruz wrote:




2013/4/21 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com





1)
Major upgrade from 9.1.4 to 9.2.4.
Used pg_upgrade
Tested on VM with 9.2.4 and no problems.
Same machine used for production server 9.1.4 and 9.2.4
When complex queries were run on production server under 9.2.4,
memory usage climbed out of control.

Unanswered questions:

a) Data set sizes between test and production machines, how do they
differ?


It's the same on both; we do a dump/restore every day for the development
/ issue team's work.


Which begs the question, what is different about your test setup that 
makes it not act up?


We know that the test servers are running on VMs with fewer resources 
than the production server.


So:

Are the VMs running the same OS and OS version as the production server?

What are 'hardware differences' between the test VMs and the physical 
server?


Are the Postgres configurations different for the test vs production 
servers?


I would guess the usage pattern is different, but in what way?
   Number of connections/sessions?
   INSERT/UPDATE/DELETE pattern?
   Client software using the database?





b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4
test and 9.2.4 production?
Since there is no single query causing the problem, I don't know if it could 
help.


For lack of anything else, pick one and try it on the various servers 
to see if something stands out.







Thanks,
--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
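Adrian's suggestion — capture the plan for one representative query on each server and compare — is easy to script. A hedged sketch: the plan text below is fabricated sample output, and the psql invocation in the comment (database name, query) is a placeholder, not something from this thread.

```shell
# On each server one would normally capture a plan with something like:
#   psql -d mydb -c 'EXPLAIN (ANALYZE, BUFFERS) SELECT ...' > /tmp/plan_prod.txt
# A fabricated sample plan stands in here so the filtering step is runnable.
cat > /tmp/plan_sample.txt <<'EOF'
Sort  (cost=123.45..128.45 rows=2000 width=32) (actual time=4.2..4.9 rows=2000 loops=1)
  Sort Key: test.id
  Sort Method: quicksort  Memory: 175kB
  ->  Seq Scan on test  (cost=0.00..35.00 rows=2000 width=32) (actual time=0.01..1.2 rows=2000 loops=1)
EOF
# Only memory-intense nodes (sort, hash, materialize) report memory use,
# so these lines are the ones worth diffing between servers.
grep -E 'Sort Method|Memory|Batches' /tmp/plan_sample.txt
```

Diffing the filtered lines from two servers quickly shows whether one of them switched from an in-memory sort to a disk sort, or grew its hash batch count.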


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Daniel Cristian Cruz
I think I didn't make it clear: the session memory usage is growing too
fast, until all server memory is used and swapping occurs.

Never saw anything like that. This version ran in a test environment for a
long time...

Thanks if someone could help me.


2013/4/20 Daniel Cristian Cruz danielcrist...@gmail.com

 Hi all,

 I've upgraded from 9.1.4 to 9.2.4 and hit a very weird issue.

 My server has connections whose RES (column from the top utility) shows too
 much memory:

 top - 19:50:58 up 384 days, 23:55,  2 users,  load average: 4.28, 6.51,
 7.68
 Tasks: 417 total,   1 running, 416 sleeping,   0 stopped,   0 zombie
 Cpu(s):  0.5%us,  0.4%sy,  0.0%ni, 94.8%id,  4.3%wa,  0.0%hi,  0.0%si,
  0.0%st
 Mem:  98923768k total, 53750228k used, 45173540k free,46192k buffers
 Swap: 49150856k total, 24147924k used, 25002932k free, 32706740k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND


 10522 pg92  17   0 16.7g 7.9g 1.7g S  0.0  8.4   2:44.69 postgres:
 sgn2 senai 10.1.3.1(24675) idle

  9367 pg92  17   0 16.6g 6.7g 517m S  0.0  7.1   5:05.98 postgres:
 sgn2 senai 10.1.3.1(44277) idle

 13336 pg92  17   0 16.5g 6.6g 511m S  0.0  7.0   0:14.01 postgres:
 sgn2 senai 10.1.3.1(51436) idle

 26864 pg92  18   0 6487m 6.0g 6.0g S  0.0  6.3   3:20.21 postgres:
 pg92 senai [local] idle

 14360 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:02.46 postgres:
 integracao senai 10.1.3.200(59197) idle

 14678 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:01.34 postgres:
 integracao senai 10.1.3.200(59201) idle

 10597 pg92  17   0 6513m 878m 830m S  0.0  0.9   0:08.81 postgres:
 integracao senai 10.1.3.200(38519) idle

  9301 pg92  15   0 6456m 188m 187m S  0.0  0.2   0:08.98
 /home/pg92/bin/postgres

  9408 pg92  17   0 6483m 115m  91m S  0.0  0.1   3:02.13 postgres:
 sgn2 senai 10.1.3.1(44310) idle

  9306 pg92  15   0 6459m  72m  70m S  0.0  0.1   0:00.93 postgres:
 writer process

  9305 pg92  16   0 6459m  60m  59m S  0.0  0.1   0:00.13 postgres:
 checkpointer process

 15028 pg92  16   0 6462m  54m  51m S  0.0  0.1   0:00.07 postgres:
 integracao senai 10.1.3.200(59203) idle

 17779 pg92  15   0 6464m  23m  19m S  0.0  0.0   0:00.30 postgres:
 sgn2 senai 10.1.3.1(34656) idle

  9307 pg92  15   0 6459m 6020 5392 S  0.0  0.0   0:00.01 postgres: wal
 writer process

  9308 pg92  15   0 6462m 4584 1740 S  0.0  0.0   0:04.94 postgres:
 autovacuum launcher process

 17766 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres:
 sgn2 senai 10.1.3.1(34642) idle

 17767 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres:
 sgn2 senai 10.1.3.1(62906) idle

 (many other conns)

 I don't know why this is happening. I kept all the same parameters
 from 9.1.4.

 The only solution right now is to roll back to the older version.

 Has someone seen something like this before?

 Thanks,
 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル
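One caveat when reading the top output above: on Linux, RES includes the shared-buffer pages a backend has touched (note the large SHR values), so summing RES across backends counts the shared buffers many times over. A rough sketch of estimating per-backend private memory as RES − SHR, using two lines from the listing above (the column positions are an assumption based on that listing):

```shell
# Columns here follow the top output above: $6 = RES, $7 = SHR.
# RES - SHR is only a rough private-memory estimate, but it avoids
# counting the shared buffers once per backend.
printf '%s\n' \
  '10522 pg92 17 0 16.7g 7.9g 1.7g S 0.0 8.4 2:44.69 postgres' \
  ' 9367 pg92 17 0 16.6g 6.7g 517m S 0.0 7.1 5:05.98 postgres' |
awk 'function mb(v) { if (v ~ /g$/) return v * 1024; if (v ~ /m$/) return v + 0; return v / 1024 }
     { printf "%s private ~%.0f MB\n", $1, mb($6) - mb($7) }'
```

For PID 10522 this still leaves several gigabytes of apparently private memory, so the shared-memory double counting does not explain everything in this report.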


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Adrian Klaver

On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

I think I didn't make it clear: the session memory usage is growing
too fast, until all server memory is used and swapping occurs.

Never saw anything like that. This version ran in a test environment
for a long time...

Thanks if someone could help me.


Before any one can help I would think more information is needed;

1) Is it on the same machine/OS as the old version?

2) What are the hardware specs for the machine?

3) Is it still in test mode or in production?

4) You seem to imply that in test mode everything worked alright, is 
that the case?


5) In either case, test/production, what is being done in the session(s)?

6) Is there anything in the Postgres logs that might shed light?




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Daniel Cristian Cruz
2013/4/20 Adrian Klaver adrian.kla...@gmail.com

 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

 I think I didn't make it clear: the session memory usage is growing
 too fast, until all server memory is used and swapping occurs.

 Never saw anything like that. This version ran in a test environment
 for a long time...

 Thanks if someone could help me.


 Before any one can help I would think more information is needed;

 1) Is it on the same machine/OS as the old version?


Yes, same machine, CentOS 5.5; I upgraded only PostgreSQL


 2) What are the hardware specs for the machine?


CISCO Blade, 96GB RAM, 24 cores


 3) Is it still in test mode or in production?


Production, sadly.


 4) You seem to imply that in test mode everything worked alright, is that
 the case?


Yes, got two servers, never got the same issue


 5) In either case, test/production, what is being done in the session(s)?


Some complex queries. Some very complex queries...


 6) Is there anything in the Postgres logs that might shed light?


Unfortunately, not at all. Just the same as the previous version: some (more)
queries taking longer than 10 seconds.

The old config allowed 256MB for work_mem. I reduced it to 24MB on the new
version, but memory usage still grew strongly and fast.

Not a clue on why this is happening.





 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル
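A note on the work_mem arithmetic above: work_mem caps each individual sort or hash node, not each session or the server as a whole, so the worst case multiplies out. A hypothetical back-of-the-envelope sketch (the nodes-per-query figure is an illustrative guess, not a measurement from this thread):

```shell
work_mem_mb=256     # the old setting mentioned above
connections=100     # concurrently active backends, illustrative
nodes_per_query=3   # sort/hash nodes running at once per backend, a guess
echo "worst case: $(( work_mem_mb * connections * nodes_per_query )) MB"
```

That is roughly 75GB of allowance on paper, which is why even a large reduction of work_mem may not help if plan shapes changed between 9.1 and 9.2.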


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Adrian Klaver

On 04/20/2013 04:30 PM, Daniel Cristian Cruz wrote:




2013/4/20 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com

On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

I think I didn't make it clear: the session memory usage is
growing too fast, until all server memory is used and swapping
occurs.

Never saw anything like that. This version ran in a test
environment for a long time...

Thanks if someone could help me.


Before any one can help I would think more information is needed;

1) Is it on the same machine/OS as the old version?


Yes, same machine, CentOS 5.5; I upgraded only PostgreSQL


How did you upgrade?
pg_upgrade
dump/restore



4) You seem to imply that in test mode everything worked alright, is
that the case?


Yes, got two servers, never got the same issue


So what difference is there between the test and production servers?




5) In either case, test/production, what is being done in the
session(s)?


Some complex queries. Some very complex queries...


Any chance to see an EXPLAIN ANALYZE for a query on the test machine vs. 
production?










--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com mailto:adrian.kla...@gmail.com




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Frank Broniewski
FYI 
http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html



Am 2012-11-07 10:28, schrieb Achilleas Mantzios:

On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:

Hey, this is really cool. I directly tried the script and there's a line
from the output that caught my eye:

   mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

is this the shared buffers? I guess so, but I want to confirm my guess ...


Hmm, that would be ideal (from an understanding perspective), but at least on 
my system (FreeBSD 8.3), no.

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is some rounded and more rationalized value less than hw.physmem: 
$mem_all = $sysctl->{"vm.stats.vm.v_page_count"} * $sysctl->{"hw.pagesize"};

Anyway, this is not so PostgreSQL-related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant 
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start. 
Also the forums: http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there would it make sense to come 
back here and maybe ask more questions. 
And since we are observing different gap percentages (mine is 5%, yours is 
26%), I think you should look into it in the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!



Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't lose any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT






-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu


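The gap Frank is chasing can also be seen by summing the fields of top's Mem: line and comparing against physical RAM. A sketch using a sample line quoted elsewhere in the thread (skipping Buf is an assumption: FreeBSD's buffer cache is wired memory, so counting it again would double-count):

```shell
# Fields in FreeBSD top's Mem: line: $2 Active, $4 Inact, $6 Wired,
# $8 Cache, $10 Buf (skipped, overlaps Wired), $12 Free.
printf 'Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free\n' |
awk 'function mb(v) { return (v ~ /G$/) ? v * 1024 : v + 0 }
     /^Mem:/ { used = mb($2) + mb($4) + mb($6) + mb($8) + mb($12)
               printf "accounted: %d MB, gap vs 32768 MB: %d MB\n", used, 32768 - used }'
```

Whatever the Mem: line does not account for against physical RAM is the same "gap" the sysctl script labels UNKNOWN.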


Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Achilleas Mantzios
Thanx for the link.

I just think it would be a good idea, instead of posting links to this 
list, to include a dense but detailed summary of the situation on your 
machine, and give as much data as possible. 
In short, you might make a quantitative compilation of this thread and 
present it in a nice way in order to gain more attention. 
Also, I think posting to -stable would be a better idea; -questions is for 
noobs.

On Παρ 09 Νοε 2012 09:37:14 Frank Broniewski wrote:
 FYI 
 http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html
 
 
 Am 2012-11-07 10:28, schrieb Achilleas Mantzios:
  On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
  Hey, this is really cool. I directly tried the script and there's a line
  from the output that caught my eye:
 
 mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
 
  is this the shared buffers? I guess so, but I want to confirm my guess ...
 
  Hmm, that would be ideal, (from an understanding perspective) but at least 
  in my system (FreeBSD-8.3), no.
 
  psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
  3840MB
 
  SYSTEM MEMORY INFORMATION:
  mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN
 
  $mem_gap_vm = $mem_all - ($mem_wire + $mem_active + $mem_inactive + 
  $mem_cache + $mem_free);
 
  mem_all is some rounded and more rationalized value less than hw.physmem: 
  $mem_all = $sysctl->{"vm.stats.vm.v_page_count"} * 
  $sysctl->{"hw.pagesize"};
 
  Anyway, this is not so postgresql related at the moment. The correct thing 
  to do (since you run production servers on FreeBSD) is to post to the 
  relevant
  FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
  freebsd-sta...@freebsd.org would be a good start.
  Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
  Only after gathering substantial info from there, would it make sense to 
  come back here and maybe ask more questions.
  And since we are observing different percentages of gaps (mine is 5%, yours 
  is 26%), i think maybe you should look into it on the FreeBSD camp.
 
  Please drop the link to the relevant thread there, if you decide to do so.
 
  I would like to follow this.
 
  Thanx!
 
 
  Frank
 
  Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
  Vick,
  fantastic script, thanx! FreeBSD sysctl system is awesome!
 
  On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
 
 
 
  On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:
 
  and this is after a few hours of running:
 
  Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
  Swap: 4096M Total, 828K Used, 4095M Free
 
 
 
 
  For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, 
  with Postgres 9.0.7 running since June 10, and is heavily pounded on 
  24x7.  The data + indexes are about 240GB on disk.  This server only runs 
  postgres aside from the basic system processes.
 
 
  Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M 
  Free
  Swap: 4096M Total, 272K Used, 4096M Free
 
 
  I agree with the conclusion that the shared memory segments are confusing 
  the output of top.  There are no memory leaks, and FreeBSD doesn't lose 
  any memory.
 
 
  There are some scripts floating around that read values from sysctl 
  vm.stats.vm and format them nicely to tell you how much memory is used up 
  and free.  Try the one referenced here: 
  http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
 
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the 
output of top.  There are no memory leaks, and FreeBSD doesn't lose any 
memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Frank Broniewski
Hey, this is really cool. I directly tried the script and there's a line 
from the output that caught my eye:


 mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

is this the shared buffers? I guess so, but I want to confirm my guess ...

Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't lose any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
 Hey, this is really cool. I directly tried the script and there's a line 
 from the output that caught my eye:
 
   mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
 
 is this the shared buffers? I guess so, but I want to confirm my guess ...

Hmm, that would be ideal (from an understanding perspective), but at least on 
my system (FreeBSD 8.3), no. 

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is some rounded and more rationalized value less than hw.physmem: 
$mem_all = $sysctl->{"vm.stats.vm.v_page_count"} * $sysctl->{"hw.pagesize"};

Anyway, this is not so PostgreSQL-related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant 
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start. 
Also the forums: http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there would it make sense to come 
back here and maybe ask more questions. 
And since we are observing different gap percentages (mine is 5%, yours is 
26%), I think you should look into it in the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!

 
 Frank
 
 Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
  Vick,
  fantastic script, thanx! FreeBSD sysctl system is awesome!
 
  On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
 
 
 
  On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:
 
  and this is after a few hours of running:
 
  Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
  Swap: 4096M Total, 828K Used, 4095M Free
 
 
 
 
  For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
  Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The 
  data + indexes are about 240GB on disk.  This server only runs postgres 
  aside from the basic system processes.
 
 
  Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
  Swap: 4096M Total, 272K Used, 4096M Free
 
 
  I agree with the conclusion that the shared memory segments are confusing 
  the output of top.  There are no memory leaks, and FreeBSD doesn't lose 
  any memory.
 
 
  There are some scripts floating around that read values from sysctl 
  vm.stats.vm and format them nicely to tell you how much memory is used up 
  and free.  Try the one referenced here: 
  http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
 
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-06 Thread Vick Khera
On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

 and this is after a few hours of running:

 Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
 Swap: 4096M Total, 828K Used, 4095M Free


For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The
data + indexes are about 240GB on disk.  This server only runs postgres
aside from the basic system processes.

Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free

I agree with the conclusion that the shared memory segments are confusing
the output of top.  There are no memory leaks, and FreeBSD doesn't lose
any memory.

There are some scripts floating around that read values from sysctl
vm.stats.vm and format them nicely to tell you how much memory is used up
and free.  Try the one referenced here:
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
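The scripts referenced above work by multiplying FreeBSD's vm.stats.vm page counters by hw.pagesize. A minimal sketch of that arithmetic with fabricated sample counts (on a live host the values would come from `sysctl -n vm.stats.vm.v_active_count` and friends):

```shell
# Fabricated page counts; on FreeBSD use: sysctl -n vm.stats.vm.v_${name}_count
pagesize=4096   # sysctl -n hw.pagesize on typical amd64 hosts
for entry in active:58000 inact:5500000 wire:1000000 cache:390000 free:48000; do
  name=${entry%%:*}; pages=${entry##*:}
  echo "$name: $(( pages * pagesize / 1048576 )) MB"
done
```

Any difference between the sum of these classes and total RAM is the "memory gap" such a script reports.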


Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios

ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends

On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
 Hi,
 
 I am running a PostgreSQL server on FreeBSD. The system has 32GB of memory. 
 Usually I use top to examine the memory usage of the system. After a 
 while, a part, approximately 5GB, vanishes from top, so that the memory 
 adds up to only 27GB. After restarting PostgreSQL, I have all 32GB 
 available again, but then it's already slightly decreasing. It's a standalone 
 database server. It has an OpenStreetMap world database containing 
 353GB of data (with indices).
 
 Some system information:
 # uname -r
 9.0-RELEASE-p3
 # pg_ctl --version
 pg_ctl (PostgreSQL) 9.1.6
 
 # cat /boot/loader.conf
 ...
 kern.ipc.semmni=256
 kern.ipc.semmns=512
 kern.ipc.semmnu=256
 kern.ipc.semumr=200
 vm.pmap.shpgperproc=400
 vm.pmap.pv_entry_max=50331648
 ...
 
 # cat /pgdata/data/postgresql.conf
 ...
 default_statistics_target = 50 # pgtune wizard 2012-04-04
 maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
 constraint_exclusion = on # pgtune wizard 2012-04-04
 checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
 effective_cache_size = 24GB # pgtune wizard 2012-04-04
 work_mem = 768MB # pgtune wizard 2012-04-04
 wal_buffers = 16MB # pgtune wizard 2012-04-04
 checkpoint_segments = 60 # 20
 shared_buffers = 8GB # pgtune wizard 2012-04-04
 max_connections = 100
 synchronous_commit = off
 
 
 So any help finding out why my system loses some RAM is greatly 
 appreciated :-) If more information is needed, I will gladly provide it.
 
 Frank
 
 
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
(Scrap my previous internal (hence fake) email address; this one is correct. 
Sorry for that.)
You can stop pgsql, start it, and then watch for the increase in the SEGSZ 
value. I am pretty confident it depicts the shared_buffers size in bytes.


On Δευ 05 Νοε 2012 13:14:37 Achilleas Mantzios wrote:
 
 ipcs in FreeBSD is a little ... tricky.
 
 ipcs -M
 ipcs -m
 ipcs -am
 
 could be your friends
 
 On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
  Hi,
  
  I am running a PostgreSQL server on FreeBSD. The system has 32GB memory. 
  Usually I use top to examine the memory usage of the system. After a 
  while, a part, approximately 5GB, vanishes from top, so that the memory 
  rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again 
  available, but then it's already slightly decreasing. It's a standalone 
  database server. It has an OpenStreetMap world database running with 
  353GB data (with indices).
  
  Some system information:
  # uname -r
  9.0-RELEASE-p3
  # pg_ctl --version
  pg_ctl (PostgreSQL) 9.1.6
  
  # cat /boot/loader.conf
  ...
  kern.ipc.semmni=256
  kern.ipc.semmns=512
  kern.ipc.semmnu=256
  kern.ipc.semumr=200
  vm.pmap.shpgperproc=400
  vm.pmap.pv_entry_max=50331648
  ...
  
  # cat /pgdata/data/postgresql.conf
  ...
  default_statistics_target = 50 # pgtune wizard 2012-04-04
  maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
  constraint_exclusion = on # pgtune wizard 2012-04-04
  checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
  effective_cache_size = 24GB # pgtune wizard 2012-04-04
  work_mem = 768MB # pgtune wizard 2012-04-04
  wal_buffers = 16MB # pgtune wizard 2012-04-04
  checkpoint_segments = 60 # 20
  shared_buffers = 8GB # pgtune wizard 2012-04-04
  max_connections = 100
  synchronous_commit = off
  
  
  So any help finding out why my system loses some RAM is greatly 
  appreciated :-) If more information is needed I will gladly provide it.
  
  Frank
  
  
  
  
 -
 Achilleas Mantzios
 IT DEPT
 
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

thank you for your feedback. I had a look at those commands and their 
output, especially in conjunction with the SEGSZ value from ipcs -am.


Here's an example output:
# ipcs -am
Shared Memory:
T       ID       KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ       CPID   LPID   ATIME     DTIME     CTIME
m   262144   5432001  --rw-------  pgsql  pgsql  pgsql    pgsql    12      8813993984  45512  45512  13:49:28  14:31:29  13:49:28


but frankly this tells me nothing. I can tell that the SEGSZ value is 
8813993984 right from the start and doesn't change anymore. The only 
value that changes is NATTCH, for which I observed a range from 8 to 36. 
I agree that the SEGSZ value matches the 8GB shared buffer, but 
how do I make the connection to my 5GB missing in top? I wonder if this 
might be the maintenance_work_mem, which is set to 4GB?


Many thanks,

Frank

Am 2012-11-05 12:14, schrieb Achilleas Mantzios:


ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends

On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
Usually I use top to examine the memory usage of the system. After a
while, a part, approximately 5GB, vanishes from top, so that the memory
rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
available, but then it's already slightly decreasing. It's a standalone
database server. It has an OpenStreetMap world database running with
353GB data (with indices).

Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off


So any help finding out why my system loses some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.

Frank





-
Achilleas Mantzios
IT DEPT





--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu


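Frank's observation that SEGSZ matches shared_buffers can be checked with simple arithmetic; a sketch (interpreting the ~213MB difference as PostgreSQL's additional shared-memory structures is an inference, not something stated in the thread):

```shell
segsz=8813993984   # SEGSZ from the ipcs -am output above, in bytes
sb_mb=8192         # shared_buffers = 8GB
seg_mb=$(( segsz / 1048576 ))
echo "segment: ${seg_mb} MB, shared_buffers: ${sb_mb} MB, extra: $(( seg_mb - sb_mb )) MB"
```

So the whole 8GB of shared_buffers (plus overhead) sits in one SysV segment, which is also why it shows up in every attached backend's memory figures in top.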


Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
How do you measure that something is missing from top? What values do you add?
I am currently running 8.3, but we shouldn't be so far apart top-wise.
What are the readings under SIZE and RES in top for all postgresql processes?
Take note that shared memory is recorded for each and every postmaster 
process running.

On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:
 Hi,
 
 thank you for your feedback. I had a look at those commands and their 
 output, especially in conjunction with the SEGSZ value from ipcs -am.
 
 Here's an example output:
 # ipcs -am
 Shared Memory:
  T       ID       KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ       CPID   LPID   ATIME     DTIME     CTIME
  m   262144   5432001  --rw-------  pgsql  pgsql  pgsql    pgsql    12      8813993984  45512  45512  13:49:28  14:31:29  13:49:28
 
 but frankly this tells me nothing. I can tell that the value SEGSZ is 
 right from the start 8813993984 and it doesn't change anymore. The only 
 value that changes is the NATTCH value, I observed a range from 8 to 36 
 there. I agree that the SEGSZ value matches the 8GB shared buffer, but 
 how can I make the connection of my 5GB missing in top? I wonder if this 
 might be the maintenance_work_mem, which is set to 4GB?
 
 Many thanks,
 
 Frank
 
 Am 2012-11-05 12:14, schrieb Achilleas Mantzios:
 
  ipcs in FreeBSD is a little ... tricky.
 
  ipcs -M
  ipcs -m
  ipcs -am
 
  could be your friends
 
  On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
  Hi,
 
  I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
  Usually I use top to examine the memory usage of the system. After a
  while, a part, approximately 5GB, vanishes from top, so that the memory
  rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
  available, but then it's already slightly decreasing. It's a standalone
  database server. It has an OpenStreetMap world database running with
  353GB data (with indices).
 
  Some system information:
  # uname -r
  9.0-RELEASE-p3
  # pg_ctl --version
  pg_ctl (PostgreSQL) 9.1.6
 
  # cat /boot/loader.conf
  ...
  kern.ipc.semmni=256
  kern.ipc.semmns=512
  kern.ipc.semmnu=256
  kern.ipc.semumr=200
  vm.pmap.shpgperproc=400
  vm.pmap.pv_entry_max=50331648
  ...
 
  # cat /pgdata/data/postgresql.conf
  ...
  default_statistics_target = 50 # pgtune wizard 2012-04-04
  maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
  constraint_exclusion = on # pgtune wizard 2012-04-04
  checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
  effective_cache_size = 24GB # pgtune wizard 2012-04-04
  work_mem = 768MB # pgtune wizard 2012-04-04
  wal_buffers = 16MB # pgtune wizard 2012-04-04
  checkpoint_segments = 60 # 20
  shared_buffers = 8GB # pgtune wizard 2012-04-04
  max_connections = 100
  synchronous_commit = off
 
 
  So any help finding out why my system loses some RAM is greatly
  appreciated :-) If more information is needed I will gladly provide it.
 
  Frank
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
 
-
Achilleas Mantzios
IT DEPT


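The SEGSZ value discussed above can be compared against shared_buffers directly. A rough sketch with the numbers from this thread (the extra space is fixed shared-memory bookkeeping such as buffer headers, wal_buffers and lock tables; maintenance_work_mem is per-process memory and does not live in this segment):

```shell
segsz_bytes=8813993984                             # SEGSZ reported by ipcs -am
shared_buffers_bytes=$((8 * 1024 * 1024 * 1024))   # shared_buffers = 8GB
overhead_mb=$(( (segsz_bytes - shared_buffers_bytes) / 1024 / 1024 ))
echo "overhead beyond shared_buffers: ${overhead_mb} MB"
```

So the 8.2GB segment is shared_buffers plus roughly 200MB of overhead, which matches the observation that SEGSZ never changes at runtime.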


Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

I just add the different memory values together (minus the buffers). 
Usually this sums up (+/-) to the installed memory size, at least on my 
other machines. I found a thread similar to my problem here [1], but no 
solution. I don't mind top showing false values, but if there's a larger 
problem behind this, then I really want to solve it.


Top is really just an indicator for this issue, it's also visible in my 
munin stats [2]


Below is an output _without_ postgresql running:
Mem: 59M Active, 17G Inact, 3953M Wired, 1325M Cache, 3283M Buf, 8663M Free
Swap: 4096M Total, 828K Used, 4095M Free


and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free

some memory related sysctl values:
hw.realmem: 34879832064
hw.physmem: 34322804736
hw.usermem: 30161108992

# sysctl vm.vmtotal
vm.vmtotal:
System wide totals computed every five seconds: (values in kilobytes)
===
Processes:  (RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 70)
Virtual Memory: (Total: 1084659688K Active: 10400940K)
Real Memory:(Total: 1616176K Active: 1349052K)
Shared Virtual Memory:  (Total: 60840K Active: 14132K)
Shared Real Memory: (Total: 11644K Active: 8388K)
Free Memory Pages:  7263972K


[1] 
http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/061247.html
[2] 
http://www.gis-hosting.lu/monitor/munin/metrico/bilbo.metrico/memory.html



Am 2012-11-05 15:21, schrieb Achilleas Mantzios:

How do you measure that smth is missing from top? What values do you add?
I am currently running 8.3 but we shouldn't be so far apart top-wise.
What is the reading under SIZE and RES in top for all postgresql processes?
Take note that shared mem should be recorded for each and every postmaster 
running.

On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:

Hi,

thank you for your feedback. I had a look at those commands and their
output, especially in conjunction with the SEGSZ value from ipcs -am

Here's an example output:
# ipcs -am
Shared Memory:
T   ID      KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ       CPID   LPID   ATIME     DTIME     CTIME
m   262144  5432001  --rw-------  pgsql  pgsql  pgsql    pgsql   12      8813993984  45512  45512  13:49:28  14:31:29  13:49:28

but frankly this tells me nothing. I can tell that the value SEGSZ is
right from the start 8813993984 and it doesn't change anymore. The only
value that changes is the NATTCH value, I observed a range from 8 to 36
there. I agree that the SEGSZ value matches the 8GB shared buffer, but
how can I make the connection of my 5GB missing in top? I wonder if this
might be the maintenance_work_mem, which is set to 4GB?

Many thanks,

Frank

Am 2012-11-05 12:14, schrieb Achilleas Mantzios:


ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends

On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
Usually I use top to examine the memory usage of the system. After a
  while, a part, approximately 5GB, vanishes from top, so that the memory
rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
available, but then it's already slightly decreasing. It's a standalone
database server. It has an OpenStreetMap world database running with
353GB data (with indices).

Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off


So any help finding out why my system loses some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.

Frank





-
Achilleas Mantzios
IT DEPT







-
Achilleas Mantzios
IT DEPT





--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
Since the top reporting goes back to normal when postgresql is stopped,
and since postgresql is special due to the use of IPC, i would be inclined 
to think that the culprit here is the shared memory.

I don't know where maintenance_work_mem really lives (process normal address 
space or IPC shared mem)
and if that makes any difference. If it is possible you might play with those 
two values and see if anything changes.

Currently i have :

maintenance_work_mem = 960MB # pgtune wizard 2012-11-01
shared_buffers = 3840MB # pgtune wizard 2012-11-01

top:
last pid: 74896;  load averages:  0.02,  0.08,  0.08
  up 4+06:20:31  18:14:19
187 processes: 1 running, 172 sleeping, 14 zombie
CPU: % user, % nice, % system, % interrupt, % idle
Mem: 4064M Active, 8111M Inact, 2014M Wired, 322M Cache, 1645M Buf, 1106M Free
Swap: 8000M Total, 608K Used, 7999M Free

hw.physmem: 17144205312
hw.usermem: 15028662272
hw.realmem: 17985175552

top (excluding Buf) amounts to 15617 Megs while physmem shows as 16349 Megs

but as i said i run 8.3 on AMD64 and pgsql 9.2.1
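The arithmetic above can be reproduced from the quoted top and sysctl output:

```shell
# Sum of top's memory categories, excluding Buf, versus hw.physmem:
top_sum_mb=$((4064 + 8111 + 2014 + 322 + 1106))   # Active+Inact+Wired+Cache+Free
physmem_mb=$((17144205312 / 1024 / 1024))         # hw.physmem converted to MB
echo "top sum: ${top_sum_mb} MB, physmem: ${physmem_mb} MB"
```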

On Δευ 05 Νοε 2012 16:11:39 Frank Broniewski wrote:
 Hi,
 
 I just add the different memory values together (minus the buffers). 
 Usually this sums up (+/-) to the installed memory size, at least on my 
 other machines. I found a thread similar to my problem here [1], but no 
 solution. I don't mind top showing false values, but if there's a larger 
 problem behind this, then I really want to solve it.
 
 Top is really just an indicator for this issue, it's also visible in my 
 munin stats [2]
 
 Below is an output _without_ postgresql running:
 Mem: 59M Active, 17G Inact, 3953M Wired, 1325M Cache, 3283M Buf, 8663M Free
 Swap: 4096M Total, 828K Used, 4095M Free
 
 
 and this is after a few hours of running:
 
 Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
 Swap: 4096M Total, 828K Used, 4095M Free
 
 some memory related sysctl values:
 hw.realmem: 34879832064
 hw.physmem: 34322804736
 hw.usermem: 30161108992
 
 # sysctl vm.vmtotal
 vm.vmtotal:
 System wide totals computed every five seconds: (values in kilobytes)
 ===
 Processes:(RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 70)
 Virtual Memory:   (Total: 1084659688K Active: 10400940K)
 Real Memory:  (Total: 1616176K Active: 1349052K)
 Shared Virtual Memory:(Total: 60840K Active: 14132K)
 Shared Real Memory:   (Total: 11644K Active: 8388K)
 Free Memory Pages:7263972K
 
 
 [1] 
 http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/061247.html
 [2] 
 http://www.gis-hosting.lu/monitor/munin/metrico/bilbo.metrico/memory.html
 
 
 Am 2012-11-05 15:21, schrieb Achilleas Mantzios:
  How do you measure that smth is missing from top? What values do you add?
  I am currently running 8.3 but we shouldn't be so far apart top-wise.
  What is the reading under SIZE and RES in top for all postgresql processes?
  Take note that shared mem should be recorded for each and every postmaster 
  running.
 
  On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:
  Hi,
 
  thank you for your feedback. I had a look at those commands and their
  output, especially in conjunction with the SEGSZ value from ipcs -am
 
  Here's an example output:
  # ipcs -am
  Shared Memory:
  T   ID      KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ       CPID   LPID   ATIME     DTIME     CTIME
  m   262144  5432001  --rw-------  pgsql  pgsql  pgsql    pgsql   12      8813993984  45512  45512  13:49:28  14:31:29  13:49:28
 
  but frankly this tells me nothing. I can tell that the value SEGSZ is
  right from the start 8813993984 and it doesn't change anymore. The only
  value that changes is the NATTCH value, I observed a range from 8 to 36
  there. I agree that the SEGSZ value matches the 8GB shared buffer, but
  how can I make the connection of my 5GB missing in top? I wonder if this
  might be the maintenance_work_mem, which is set to 4GB?
 
  Many thanks,
 
  Frank
 
  Am 2012-11-05 12:14, schrieb Achilleas Mantzios:
 
  ipcs in FreeBSD is a little ... tricky.
 
  ipcs -M
  ipcs -m
  ipcs -am
 
  could be your friends
 
  On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
  Hi,
 
  I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
  Usually I use top to examine the memory usage of the system. After a
  while, a part, approximately 5GB, vanishes from top, so that the memory
  rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
  available, but then it's already slightly decreasing. It's a standalone
  database server. It has an OpenStreetMap world database running with
  353GB data (with indices).
 
  Some system information:
  # uname -r
  9.0-RELEASE-p3
  # pg_ctl --version
  pg_ctl (PostgreSQL) 9.1.6
 
  # cat /boot/loader.conf
  ...
  

Re: [GENERAL] Memory issues

2012-10-14 Thread Shiran Kleiderman
Hi
I've returned the memory configs to the default, erased data from my db and
am testing the system again.

This is the output of *cat /proc/meminfo*
Thanks

root@ip-10-194-167-240:~# cat /proc/meminfo
MemTotal:7629508 kB
MemFree:  170368 kB
Buffers:   10272 kB
Cached:  6220848 kB
SwapCached:0 kB
Active:  3249748 kB
Inactive:3936960 kB
Active(anon): 971336 kB
Inactive(anon):  2103844 kB
Active(file):2278412 kB
Inactive(file):  1833116 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:524284 kB
SwapFree: 522716 kB
Dirty: 83068 kB
Writeback:  3080 kB
AnonPages:955856 kB
Mapped:  2132564 kB
Shmem:   2119424 kB
Slab: 157200 kB
SReclaimable: 144488 kB
SUnreclaim:12712 kB
KernelStack:1184 kB
PageTables:21092 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 4339036 kB
Committed_AS:3637424 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   26152 kB
VmallocChunk:   34359710052 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k: 7872512 kB
DirectMap2M:   0 kB


On Thu, Sep 27, 2012 at 8:59 AM, Andres Freund and...@2ndquadrant.com wrote:

 On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
  Hi,
  I'm using an Amazon EC2 instance with the following spec and the
  application that I'm running uses a postgres DB 9.1.
  The app has 3 main cron jobs.
 
  *Ubuntu 12, High-Memory Extra Large Instance
  17.1 GB of memory
  6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
  420 GB of instance storage
  64-bit platform*
 
  I've changed the main default values under file *postgresql.conf* to:
  shared_buffers = 4GB
  work_mem = 16MB
  wal_buffers = 16MB
  checkpoint_segments = 32
  effective_cache_size = 8GB
 
  When I run the app, after an hour or two, free -m looks like below and
 the
  crons can't run due to memory loss or similar (i'm new to postgres and db
  admin).
  Thanks!
 
  free -m, errors:
 
  total used free shared buffers cached
  Mem: 17079 13742 3337 0 64 11882
  -/+ buffers/cache: 1796 15283
  Swap: 511 0 511
 
  total used *free* shared buffers cached
  Mem: 17079 16833 *245 *0 42 14583
  -/+ buffers/cache: 2207 14871
  Swap: 511 0 511
 
  **free above stays low even when nothing is running.
 
  **errors:
  *DBI connect('database=---;host=localhost','postgres',...) failed: could
  not fork new process for connection: Cannot allocate memory*
  could not fork new process for connection: Cannot allocate memory
 
  and
  execute failed: ERROR: out of memory
  DETAIL: Failed on request of size 968. [for Statement 
  SELECT DISTINCT
 could you show cat /proc/meminfo?

 Greetings,

 Andres
 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


Re: [GENERAL] Memory issues

2012-10-14 Thread Shiran Kleiderman
Hi
This is the output of meminfo when the system is under some stress.
Thanks

cif@ip-10-194-167-240:/tmp$ cat /proc/meminfo
MemTotal:7629508 kB
MemFree:   37820 kB
Buffers:2108 kB
Cached:  5500200 kB
SwapCached:  332 kB
Active:  4172020 kB
Inactive:3166244 kB
Active(anon):1864040 kB
Inactive(anon):  1568760 kB
Active(file):2307980 kB
Inactive(file):  1597484 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:524284 kB
SwapFree:  0 kB
Dirty: 23336 kB
Writeback: 0 kB
AnonPages:   1835716 kB
Mapped:  1610460 kB
Shmem:   1596916 kB
Slab: 136168 kB
SReclaimable: 123820 kB
SUnreclaim:12348 kB
KernelStack:1176 kB
PageTables:23148 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 4339036 kB
Committed_AS:4517524 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   26152 kB
VmallocChunk:   34359710052 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k: 7872512 kB
DirectMap2M:   0 kB
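Comparing two limits in this stressed meminfo output suggests why fork() fails with "Cannot allocate memory": Committed_AS has overrun CommitLimit, which is exactly the failure mode under strict overcommit (vm.overcommit_memory=2; CommitLimit here is roughly SwapTotal plus 50% of RAM, i.e. overcommit_ratio=50). A sketch with the quoted values:

```shell
commit_limit_kb=4339036    # CommitLimit from the stressed /proc/meminfo
committed_as_kb=4517524    # Committed_AS from the same snapshot
deficit_kb=$(( committed_as_kb - commit_limit_kb ))
if [ "$committed_as_kb" -gt "$commit_limit_kb" ]; then
  echo "commit limit exhausted by ${deficit_kb} kB"
fi
```

Adding swap or raising vm.overcommit_ratio would raise CommitLimit; the large Shmem figure (shared_buffers) counts against the commit limit as well.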


On Mon, Oct 15, 2012 at 2:45 AM, Shiran Kleiderman shira...@gmail.com wrote:

 Hi
 I've returned the memory configs to the default, erased data from my db
 and am testing the system again.

 This is the output of *cat /proc/meminfo*
 Thanks

 root@ip-10-194-167-240:~# cat /proc/meminfo
 MemTotal:7629508 kB
 MemFree:  170368 kB
 Buffers:   10272 kB
 Cached:  6220848 kB
 SwapCached:0 kB
 Active:  3249748 kB
 Inactive:3936960 kB
 Active(anon): 971336 kB
 Inactive(anon):  2103844 kB
 Active(file):2278412 kB
 Inactive(file):  1833116 kB
 Unevictable:   0 kB
 Mlocked:   0 kB
 SwapTotal:524284 kB
 SwapFree: 522716 kB
 Dirty: 83068 kB
 Writeback:  3080 kB
 AnonPages:955856 kB
 Mapped:  2132564 kB
 Shmem:   2119424 kB
 Slab: 157200 kB
 SReclaimable: 144488 kB
 SUnreclaim:12712 kB
 KernelStack:1184 kB
 PageTables:21092 kB
 NFS_Unstable:  0 kB
 Bounce:0 kB
 WritebackTmp:  0 kB
 CommitLimit: 4339036 kB
 Committed_AS:3637424 kB
 VmallocTotal:   34359738367 kB
 VmallocUsed:   26152 kB
 VmallocChunk:   34359710052 kB
 HardwareCorrupted: 0 kB
 AnonHugePages: 0 kB
 HugePages_Total:   0
 HugePages_Free:0
 HugePages_Rsvd:0
 HugePages_Surp:0
 Hugepagesize:   2048 kB
 DirectMap4k: 7872512 kB
 DirectMap2M:   0 kB


 On Thu, Sep 27, 2012 at 8:59 AM, Andres Freund and...@2ndquadrant.com wrote:

 On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
  Hi,
  I'm using an Amazon EC2 instance with the following spec and the
  application that I'm running uses a postgres DB 9.1.
  The app has 3 main cron jobs.
 
  *Ubuntu 12, High-Memory Extra Large Instance
  17.1 GB of memory
  6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
  420 GB of instance storage
  64-bit platform*
 
  I've changed the main default values under file *postgresql.conf* to:
  shared_buffers = 4GB
  work_mem = 16MB
  wal_buffers = 16MB
  checkpoint_segments = 32
  effective_cache_size = 8GB
 
  When I run the app, after an hour or two, free -m looks like below and
 the
  crons can't run due to memory loss or similar (i'm new to postgres and
 db
  admin).
  Thanks!
 
  free -m, errors:
 
  total used free shared buffers cached
  Mem: 17079 13742 3337 0 64 11882
  -/+ buffers/cache: 1796 15283
  Swap: 511 0 511
 
  total used *free* shared buffers cached
  Mem: 17079 16833 *245 *0 42 14583
  -/+ buffers/cache: 2207 14871
  Swap: 511 0 511
 
  **free above stays low even when nothing is running.
 
  **errors:
  *DBI connect('database=---;host=localhost','postgres',...) failed: could
  not fork new process for connection: Cannot allocate memory*
  could not fork new process for connection: Cannot allocate memory
 
  and
  execute failed: ERROR: out of memory
  DETAIL: Failed on request of size 968. [for Statement 
  SELECT DISTINCT
 could you show cat /proc/meminfo?

 Greetings,

 Andres
 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




 --
 Best,
 Shiran Kleiderman
 +972  - 542380838
 Skype - shirank1




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


Re: [GENERAL] Memory issues

2012-09-27 Thread Andres Freund
On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
 Hi,
 I'm using an Amazon EC2 instance with the following spec and the
 application that I'm running uses a postgres DB 9.1.
 The app has 3 main cron jobs.
 
 *Ubuntu 12, High-Memory Extra Large Instance
 17.1 GB of memory
 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
 420 GB of instance storage
 64-bit platform*
 
 I've changed the main default values under file *postgresql.conf* to:
 shared_buffers = 4GB
 work_mem = 16MB
 wal_buffers = 16MB
 checkpoint_segments = 32
 effective_cache_size = 8GB
 
 When I run the app, after an hour or two, free -m looks like below and the
 crons can't run due to memory loss or similar (i'm new to postgres and db
 admin).
 Thanks!
 
 free -m, errors:
 
 total used free shared buffers cached
 Mem: 17079 13742 3337 0 64 11882
 -/+ buffers/cache: 1796 15283
 Swap: 511 0 511
 
 total used *free* shared buffers cached
 Mem: 17079 16833 *245 *0 42 14583
 -/+ buffers/cache: 2207 14871
 Swap: 511 0 511
 
 **free above stays low even when nothing is running.
 
 **errors:
 *DBI connect('database=---;host=localhost','postgres',...) failed: could
 not fork new process for connection: Cannot allocate memory*
 could not fork new process for connection: Cannot allocate memory
 
 and
 execute failed: ERROR: out of memory
 DETAIL: Failed on request of size 968. [for Statement 
 SELECT DISTINCT
could you show cat /proc/meminfo?

Greetings,

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Memory issues

2012-09-26 Thread Scott Marlowe
On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman shira...@gmail.com wrote:

 Hi
 Thanks for your answer.
 I understood that the server is ok memory wise.
 What can I check on the client side or the DB queries?

Well, you're connecting to localhost, so I'd expect a memory issue to
show up in free, and I'm not seeing one.  Are you really connecting to
localhost or not?




Re: [GENERAL] Memory issues

2012-09-26 Thread Shiran Kleiderman
Hi
Thanks again.
Right now, this is *free -m and ps aux* and none of the crons can run -
can't allocate memory.

cif@domU-12-31-39-08-06-20:~$ free -m
 total   used   free sharedbuffers cached
Mem: 17079  12051   5028  0270   9578
-/+ buffers/cache:   2202  14877
Swap:  511  0511


cif@domU-12-31-39-08-06-20:~$ ps aux
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root 1  0.0  0.0  24316  2280 ?Ss   Sep24   0:00 /sbin/init
root 2  0.0  0.0  0 0 ?SSep24   0:00 [kthreadd]
root 3  0.0  0.0  0 0 ?SSep24   0:00
[ksoftirqd/0]
root 4  0.0  0.0  0 0 ?SSep24   0:00
[kworker/0:0]
root 5  0.0  0.0  0 0 ?SSep24   0:00
[kworker/u:0]
root 6  0.0  0.0  0 0 ?SSep24   0:00
[migration/0]
root 7  0.0  0.0  0 0 ?SSep24   0:00
[watchdog/0]
root 8  0.0  0.0  0 0 ?SSep24   0:00
[migration/1]
root 9  0.0  0.0  0 0 ?SSep24   0:00
[kworker/1:0]
root10  0.0  0.0  0 0 ?SSep24   0:01
[ksoftirqd/1]
root11  0.0  0.0  0 0 ?SSep24   0:00
[watchdog/1]
root12  0.0  0.0  0 0 ?S   Sep24   0:00 [cpuset]
root13  0.0  0.0  0 0 ?S   Sep24   0:00 [khelper]
root14  0.0  0.0  0 0 ?SSep24   0:00 [kdevtmpfs]
root15  0.0  0.0  0 0 ?S   Sep24   0:00 [netns]
root16  0.0  0.0  0 0 ?SSep24   0:00
[kworker/u:1]
root17  0.0  0.0  0 0 ?SSep24   0:00 [xenwatch]
root18  0.0  0.0  0 0 ?SSep24   0:00 [xenbus]
root19  0.0  0.0  0 0 ?SSep24   0:00
[sync_supers]
root20  0.0  0.0  0 0 ?SSep24   0:00
[bdi-default]
root21  0.0  0.0  0 0 ?S   Sep24   0:00
[kintegrityd]
root22  0.0  0.0  0 0 ?S   Sep24   0:00 [kblockd]
root23  0.0  0.0  0 0 ?S   Sep24   0:00 [ata_sff]
root24  0.0  0.0  0 0 ?SSep24   0:00 [khubd]
root25  0.0  0.0  0 0 ?S   Sep24   0:00 [md]
root26  0.0  0.0  0 0 ?SSep24   0:02
[kworker/0:1]
root28  0.0  0.0  0 0 ?SSep24   0:00
[khungtaskd]
root29  0.0  0.0  0 0 ?SSep24   0:00 [kswapd0]
root30  0.0  0.0  0 0 ?SN   Sep24   0:00 [ksmd]
root31  0.0  0.0  0 0 ?SSep24   0:00
[fsnotify_mark]
root32  0.0  0.0  0 0 ?SSep24   0:00
[ecryptfs-kthrea]
root33  0.0  0.0  0 0 ?S   Sep24   0:00 [crypto]
root41  0.0  0.0  0 0 ?S   Sep24   0:00 [kthrotld]
root42  0.0  0.0  0 0 ?SSep24   0:00 [khvcd]
root43  0.0  0.0  0 0 ?SSep24   0:01
[kworker/1:1]
root62  0.0  0.0  0 0 ?S   Sep24   0:00
[devfreq_wq]
root   176  0.0  0.0  0 0 ?S   Sep24   0:00 [kdmflush]
root   187  0.0  0.0  0 0 ?SSep24   0:01
[jbd2/xvda1-8]
root   188  0.0  0.0  0 0 ?S   Sep24   0:00
[ext4-dio-unwrit]
root   258  0.0  0.0  17224   640 ?SSep24   0:00
upstart-udev-bridge --daemon
root   265  0.0  0.0  21460  1196 ?Ss   Sep24   0:00
/sbin/udevd --daemon
root   328  0.0  0.0  21456   712 ?SSep24   0:00
/sbin/udevd --daemon
root   329  0.0  0.0  21456   716 ?SSep24   0:00
/sbin/udevd --daemon
root   389  0.0  0.0  15180   392 ?SSep24   0:00
upstart-socket-bridge --daemon
root   419  0.0  0.0   7256  1008 ?Ss   Sep24   0:00 dhclient3
-e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf
/var/lib/dhcp/dhclient.eth0.leases -1 eth
root   574  0.0  0.0  0 0 ?SSep24   0:03
[jbd2/dm-0-8]
root   575  0.0  0.0  0 0 ?S   Sep24   0:00
[ext4-dio-unwrit]
root   610  0.0  0.0  49948  2880 ?Ss   Sep24   0:00
/usr/sbin/sshd -D
syslog 625  0.0  0.0 253708  1552 ?Sl   Sep24   0:11 rsyslogd
-c5
102630  0.0  0.0  23808   944 ?Ss   Sep24   0:00
dbus-daemon --system --fork --activation=upstart
root   687  0.0  0.0  14496   968 tty4 Ss+  Sep24   0:00
/sbin/getty -8 38400 tty4
root   696  0.0  0.0  14496   972 tty5 Ss+  Sep24   0:00
/sbin/getty -8 38400 tty5
root   708  0.0  0.0  14496   968 tty2 Ss+  Sep24   0:00
/sbin/getty -8 38400 tty2
root   710  0.0  0.0  14496   964 tty3 Ss+  Sep24   0:00
/sbin/getty -8 38400 tty3
root   715  0.0  0.0  14496   968 tty6 Ss+  Sep24   0:00
/sbin/getty -8 38400 tty6
root 

Re: [GENERAL] Memory issues

2012-09-25 Thread Shiran Kleiderman
Hi
Thanks for your answer.
I understood that the server is ok memory wise.
What can I check on the client side or the DB queries?

Thank u.
On Wed, Sep 26, 2012 at 2:56 AM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman shira...@gmail.com
 wrote:
 
 
  Hi,
  I'm using an Amazon EC2 instance with the following spec and the
  application that I'm running uses a postgres DB 9.1.
  The app has 3 main cron jobs.
 
  Ubuntu 12, High-Memory Extra Large Instance
  17.1 GB of memory
  6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
  420 GB of instance storage
  64-bit platform
 
  I've changed the main default values under file postgresql.conf to:
  shared_buffers = 4GB
  work_mem = 16MB
  wal_buffers = 16MB
  checkpoint_segments = 32
  effective_cache_size = 8GB
 
  When I run the app, after an hour or two, free -m looks like below and
 the
  crons can't run due to memory loss or similar (i'm new to postgres and db
  admin).
  Thanks!
 
  free -m, errors:
 
  total used free shared buffers cached
  Mem: 17079 13742 3337 0 64 11882
  -/+ buffers/cache: 1796 15283
  Swap: 511 0 511

 You have 11.8G cached, that's basically free memory on demand.

  total used free shared buffers cached
  Mem: 17079 16833 245 0 42 14583
  -/+ buffers/cache: 2207 14871
  Swap: 511 0 511

 Here you have 14.5G cached, again that's free memory so to speak.
 I.e. when something needs it it gets allocated.

  **free above stays low even when nothing is running.
 
 
  **errors:
  DBI connect('database=---;host=localhost','postgres',...) failed: could
 not
  fork new process for connection: Cannot allocate memory
  could not fork new process for connection: Cannot allocate memory

 This error is happening in your client process.  Maybe it's 32 bit or
 something and running out of local memory in its process space? Maybe
 memory is so fragmented that no large blocks can get allocated or
 something?  Either way, your machine has plenty of memory according to
 free.  BTW, it's pretty common for folks new to unix to mis-read free
 and not realize that cached memory + free memory is what's really
 available.




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


Re: [GENERAL] Memory issues

2012-09-25 Thread Scott Marlowe
On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman shira...@gmail.com wrote:


 Hi,
 I'm using an Amazon EC2 instance with the following spec and the
 application that I'm running uses a postgres DB 9.1.
 The app has 3 main cron jobs.

 Ubuntu 12, High-Memory Extra Large Instance
 17.1 GB of memory
 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
 420 GB of instance storage
 64-bit platform

 I've changed the main default values under file postgresql.conf to:
 shared_buffers = 4GB
 work_mem = 16MB
 wal_buffers = 16MB
 checkpoint_segments = 32
 effective_cache_size = 8GB

 When I run the app, after an hour or two, free -m looks like below and the
 crons can't run due to memory loss or similar (i'm new to postgres and db
 admin).
 Thanks!

 free -m, errors:

 total used free shared buffers cached
 Mem: 17079 13742 3337 0 64 11882
 -/+ buffers/cache: 1796 15283
 Swap: 511 0 511

You have 11.8G cached, that's basically free memory on demand.

 total used free shared buffers cached
 Mem: 17079 16833 245 0 42 14583
 -/+ buffers/cache: 2207 14871
 Swap: 511 0 511

Here you have 14.5G cached, again that's free memory so to speak.
I.e. when something needs it it gets allocated.

 **free above stays low even when nothing is running.


 **errors:
 DBI connect('database=---;host=localhost','postgres',...) failed: could not
 fork new process for connection: Cannot allocate memory
 could not fork new process for connection: Cannot allocate memory

This error is happening in your client process.  Maybe it's 32 bit or
something and running out of local memory in its process space? Maybe
memory is so fragmented that no large blocks can get allocated or
something?  Either way, your machine has plenty of memory according to
free.  BTW, it's pretty common for folks new to unix to mis-read free
and not realize that cached memory + free memory is what's really
available.
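Scott's point can be checked against the first free -m sample in the thread: what is effectively available is free + buffers + cached, which is what the "-/+ buffers/cache" line of free already reports (15283 MB here):

```shell
# Columns of the `free -m` Mem: line are total/used/free/shared/buffers/cached;
# effectively available memory is free + buffers + cached.
echo "Mem: 17079 13742 3337 0 64 11882" |
awk '{ printf "available: %d MB\n", $4 + $6 + $7 }'
```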




Re: [GENERAL] Memory error in user-defined aggregation function

2012-08-07 Thread Tom Lane
Adriaan Joubert adriaan.joub...@gmail.com writes:
 I've implemented an aggregation function to compute quartiles in C
 borrowing liberally from orafce code. I use this code in a windowing
 context and it worked fine until today - and I'm not sure what
 changed. This is on 9.1.2 and I have also tried it on 9.1.4.

Hm, it doesn't look very different from what's done in e.g. array_agg.
You do have the transition datatype declared as internal, no?

I notice that your transition function is sloppy about returning a null
pointer (as opposed to a SQL null) if both input arguments are null.
If all the aggregated values were nulls then the null pointer would
reach the final function and cause a crash similar to the one described.
But that's hardly data specific.

 I've recompiled postgres with debugging enabled and have connected to
 the backend with gdb, but haven't had any joy in persuading gdb to
 actually stop in the correct file so that I can step through. I'll
 keep on trying to make some headway with that.

I've found that gdb takes extra persuasion to notice shared libraries
that are loaded after it attaches to the process.  Usually the path
of least resistance is to ensure that the library is loaded before
you attach.  Use LOAD, or just CREATE OR REPLACE one of the functions
in your library.

regards, tom lane



Re: [GENERAL] Memory error in user-defined aggregation function

2012-08-07 Thread Adriaan Joubert
Hi,

Finally got this running under the debugger and figured out what is
going on. I had been under the impression that

 if (PG_ARGISNULL(0))
 PG_RETURN_NULL();

 state = (quartile_state *) PG_GETARG_POINTER(0);

would ensure that state was never a null pointer. However this is not
the case, and an additional check for state==0x0 solved the problem.
Somewhat unexpected, I have to say.

I would still be interested in any ways in which this implementation
could be improved. It would be good if there were some model
implementations for this type of thing - without orafce to guide me I
would have had a hard time figuring any of this out from the docs. I'd
gladly make the quartile implementation available for this purpose if
there is interest.

Adriaan


On 7 August 2012 15:04, Adriaan Joubert adriaan.joub...@gmail.com wrote:
 Hi,

 I've implemented an aggregation function to compute quartiles in C
  borrowing liberally from orafce code. I use this code in a windowing
 context and it worked fine until today - and I'm not sure what
 changed. This is on 9.1.2 and I have also tried it on 9.1.4.

 What I have determined so far (by sprinkling a lot of elog's
 throughout the code) is that it does not seem to be data specific,
 although it seems to depend on the number of aggregations I do (up to
 about 1250 seems to be fine, beyond that it chokes). I also
 established that there does not seem to be a problem with the transfer
 function, and the data is accumulated without any issues. The error I
 see is in the call to first_quartile_final (listed below). The pointer
 to the transfer data structure is not null, but accessing the field
 mstate->nelems causes a segfault. So the transfer data structure pointer
 is bogus.

 I've recompiled postgres with debugging enabled and have connected to
 the backend with gdb, but haven't had any joy in persuading gdb to
 actually stop in the correct file so that I can step through. I'll
 keep on trying to make some headway with that.

 In the meantime I would appreciate any comments as to whether the
 approach taken is the right one, and whether additional checks can be
 inserted to avoid this segmentation faults.

 Many thanks,

 Adriaan


 My transfer data structure is

 typedef struct
 {
   int len; /* allocated length */
   int nextlen; /* next allocated length */
   int nelems; /* number of valid entries */
   float8  *values;
 } quartile_state;

 On the first call to the aggregate function this data structure is
 allocated as follows:

 static quartile_state *
 quartile_accummulate(quartile_state *mstate, float8 value,
 MemoryContext aggcontext)
 {
 MemoryContext oldcontext;

 if (mstate == NULL)
 {
 /* First call - initialize */
 oldcontext = MemoryContextSwitchTo(aggcontext);
 mstate = palloc(sizeof(quartile_state));
 mstate->len = 512;
 mstate->nextlen = 2 * 512;
 mstate->nelems = 0;
 mstate->values = palloc(mstate->len * sizeof(float8));
 MemoryContextSwitchTo(oldcontext);
 }
 else
 {
 if (mstate->nelems >= mstate->len)
 {
 int newlen = mstate-nextlen;

 oldcontext = MemoryContextSwitchTo(aggcontext);
 mstate->nextlen += mstate->len;
 mstate->len = newlen;
 mstate->values = repalloc(mstate->values, mstate->len
  * sizeof(float8));
 MemoryContextSwitchTo(oldcontext);
 }
 }

 mstate->values[mstate->nelems++] = value;

 return mstate;
 }


 And the transfer function itself is

 PG_FUNCTION_INFO_V1(quartile_transfer);
 Datum
 quartile_transfer(PG_FUNCTION_ARGS) {
 MemoryContext   aggcontext;
 quartile_state *state = NULL;
 float8 elem;

 if (!AggCheckCallContext(fcinfo, aggcontext))
 {
 elog(ERROR, "quartile_transform called in non-aggregate context");
 }

 state = PG_ARGISNULL(0) ? NULL : (quartile_state *) 
 PG_GETARG_POINTER(0);
 if (PG_ARGISNULL(1))
 PG_RETURN_POINTER(state);

 elem = PG_GETARG_FLOAT8(1);

 state = quartile_accummulate(state, elem, aggcontext);

 PG_RETURN_POINTER(state);
 }

 The final function for the computation of the first quartile is

 PG_FUNCTION_INFO_V1(first_quartile_final);
 Datum
 first_quartile_final(PG_FUNCTION_ARGS) {
 quartile_state *state = NULL;
 float8 result;

 if (PG_ARGISNULL(0))
 PG_RETURN_NULL();

 state = (quartile_state *) PG_GETARG_POINTER(0);

 /** HERE state->nelems causes a segfault */
 if (state->nelems < 4)
 PG_RETURN_NULL();

 result = quartile_result(state, 0.25);

 PG_RETURN_FLOAT8(result);
 }
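
The quartile_result helper is not shown above; as a rough illustration of the math it presumably performs, here is a self-contained version of quantile computation (an assumption: linear interpolation between sorted order statistics, which may differ from the orafce-derived code; `quantile` and `cmp_double` are hypothetical names, not part of the original extension):

```c
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Quantile by linear interpolation between order statistics.
 * q = 0.25 gives the first quartile. Sorts the array in place.
 * Assumes n >= 1. */
double quantile(double *values, int n, double q)
{
    qsort(values, n, sizeof(double), cmp_double);
    double pos = q * (n - 1);          /* fractional index into sorted data */
    int lo = (int)pos;
    double frac = pos - lo;
    if (lo + 1 >= n)
        return values[n - 1];
    return values[lo] + frac * (values[lo + 1] - values[lo]);
}
```

For five values {1,2,3,4,5}, q = 0.25 lands exactly on the second order statistic, so the first quartile is 2.0.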


Re: [GENERAL] Memory Overcommit

2012-06-07 Thread Tom Lane
Andy Chambers achamb...@mcna.net writes:
 We've just run into the dreaded OOM Killer.  I see that on Linux
 2.6, it's recommended to turn off memory overcommit.  I'm trying to
 understand the implications of doing this.  The interweb says this
 means that forking servers can't make use of copy on write
 semantics.  Is this true?

Don't know where you read that, but it's nonsense AFAIK.

The actual issue here is that when a process fork()s, initially the
child shares all the pages of the parent process.  Over time, both the
child and the parent will dirty pages that had been shared, forcing a
copy-on-write to happen, after which there's a separate copy of such
pages for each process.  So if the parent had N pages, the ultimate
memory requirement will be for something between N and 2N pages, and
there's not a very good way to know in advance what it will be.
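
A tiny illustration of that sharing (a sketch, POSIX only; `cow_demo` is a made-up name): after fork(), the child's write to a variable dirties the page and triggers a copy-on-write, so the parent's copy is unaffected.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 1 if the parent's copy of x is untouched by the child's write,
 * demonstrating that the shared page was copied on write. */
int cow_demo(void)
{
    int x = 42;
    pid_t pid = fork();
    if (pid < 0)
        return 0;                  /* fork failed */
    if (pid == 0) {                /* child: dirty the page */
        x = 99;
        _exit(x == 99 ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    return x == 42 && WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```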

Now the problem the kernel has is, what if a COW needs to happen and it
has noplace to put the new page?  It cannot report an ENOMEM failure
because the process is not making a failable kernel call, it's just
writing some memory that it has every reason to think it can write.
About all the kernel can do is terminate that process, ie, OOM kill.

The only way to be certain an OOM kill cannot happen is if you reserve N
pages worth of memory/swap space for the child process when you do the
fork (since then you can fail the fork call, if there's not that much
available).  You can still do COW rather than physically duplicating the
whole address space right away, but you have to bank enough spare
space to be sure there will be room when and if the time comes.

Overcommit simply means that the kernel doesn't do such conservative
advance reservation, and so it might be forced into an OOM kill.

The downside of turning off overcommit is that you will have pretty
severe under-utilization of your memory, since in practice a lot of a
process's address space is read-only and can be shared indefinitely by
parent and child.  This can usually be alleviated by providing a lot of
swap space that you expect won't get used.  Of course, if your tuning
calculations are off and the swap does start getting used a lot,
performance goes to hell in a handbasket.  So it's a tradeoff --- do you
want to keep running but possibly slowly, or are you willing to cope
with OOM kills for better average utilization of your hardware?
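
On Linux, turning overcommit off as discussed above is a sysctl setting (a configuration sketch; the ratio shown is a tunable, not a recommendation):

```shell
# mode 2: strict accounting; commit limit = swap + overcommit_ratio% of RAM
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80

# to persist across reboots, in /etc/sysctl.conf:
#   vm.overcommit_memory = 2
#   vm.overcommit_ratio = 80
```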

regards, tom lane



Re: [GENERAL] Memory Management in pqlib, Garbage Collection support

2012-05-03 Thread Peter Bex
On Thu, May 03, 2012 at 09:08:53AM +0200, Alexander Reichstadt wrote:
 
 Hi,
 
 since I got no answer so far I searched through the docu again. I searched 
 for GC as well as Garbage, and all garbage refers to is with regard to 
 vacuuming a database. But my question refers to whether or not memory
 management is with garbage collection supported or not. When I try to link 
 against pqlib, it seems I need to have garbage collection enabled in order 
 for it to link. But the documentation mentions it nowhere.

What kind of garbage collection do you need to have enabled in order to link?

C is based on manual memory-management, and you can't generally have
garbage-collection in it.  Hence, libpq is *not* garbage-collected and
your statement that it needs garbage collection enabled doesn't make
much sense to me.

Are you talking about libpq bindings to some other language, perhaps?

 Please, can someone confirm this or is this the wrong list?

This is the right list.

Cheers,
Peter
-- 
http://sjamaan.ath.cx
--
The process of preparing programs for a digital computer
 is especially attractive, not only because it can be economically
 and scientifically rewarding, but also because it can be an aesthetic
 experience much like composing poetry or music.
-- Donald Knuth



Re: [GENERAL] Memory Management in pqlib, Garbage Collection support

2012-05-03 Thread John R Pierce

On 05/03/12 12:08 AM, Alexander Reichstadt wrote:

since I got no answer so far I searched through the docu again. I searched for 
GC as well as Garbage, and all garbage refers to is with regard to vacuuming a 
database. But my question refers to whether or not memory management is with
garbage collection supported or not. When I try to link against pqlib, it seems 
I need to have garbage collection enabled in order for it to link. But the 
documentation mentions it nowhere.


what is this garbage collection you are talking about having to 
enable?    I'm not aware of any such options in C, and this is a C API.
not Java.  not Python.  not C++.




--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast




Re: [GENERAL] Memory Management in pqlib, Garbage Collection support

2012-05-03 Thread Alexander Reichstadt
Thanks, that answers my question. In Objective-C, as in many other 
languages, there is the option to turn on Garbage Collection. It's a separate 
thread that scans memory for strong pointers, tracing their sources and origins, 
and reclaims memory to avoid leaks. Anything unreferenced and no longer 
needed is cleaned up automatically. There are some border cases where GC can 
fail, but in most cases it works.

There are three values for the compiler: off, supported and required. One 
cannot mix subprojects unlike in e.g. ARC (Automated Reference Counting). 
Either it's on or off. Supported means that it covers both sides, manual 
memory management and automated management.

As languages evolve toward GC, the call that used to deallocate, free, or 
release an object (whatever term a given language uses), and in which one would 
normally also release associated resources that are not necessarily memory 
related (closing sockets, file handles, etc.), is replaced with a 
language-specific finalizer that the garbage collector invokes. That still 
provides for closing those resources while memory management itself is handled 
automatically. All directly memory-related calls become no-ops for backward 
compatibility. In the end one can compile with GC on or off, and the compiler 
selects the calls it needs accordingly: either resource-only management or 
resource-and-memory management.

In general there are libs that provide garbage collection for C as well, like 
here:
http://www.hpl.hp.com/personal/Hans_Boehm/gc/

For example, it'd help avoid leaks like those caused by a result not being 
PQclear'ed.
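
Without a collector, the usual libpq discipline is a single cleanup path so that every PGresult gets PQclear'ed on all exits. A minimal self-contained sketch of the goto-cleanup idiom (malloc/free stand in for PQexec/PQclear here, since this example doesn't link against libpq, and `process` is a made-up function name):

```c
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success, -1 on failure; both buffers are always freed,
 * whichever path exits the function. */
int process(int fail_second)
{
    int rc = -1;
    char *res1 = NULL, *res2 = NULL;   /* stand-ins for PGresult pointers */

    res1 = malloc(16);                 /* stand-in for PQexec(...) */
    if (res1 == NULL)
        goto cleanup;
    strcpy(res1, "first");

    res2 = fail_second ? NULL : malloc(16);
    if (res2 == NULL)
        goto cleanup;
    strcpy(res2, "second");

    rc = 0;                            /* success */
cleanup:
    free(res2);                        /* PQclear(res2) in real libpq code */
    free(res1);                        /* PQclear(res1) likewise */
    return rc;
}
```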

Alex


Am 03.05.2012 um 09:16 schrieb Peter Bex:

 On Thu, May 03, 2012 at 09:08:53AM +0200, Alexander Reichstadt wrote:
 
 Hi,
 
 since I got no answer so far I searched through the docu again. I searched 
 for GC as well as Garbage, and all garbage refers to is with regard to 
 vacuuming a database. But my question refers to whether or not memory
 management is with garbage collection supported or not. When I try to link 
 against pqlib, it seems I need to have garbage collection enabled in order 
 for it to link. But the documentation mentions it nowhere.
 
 What kind of garbage collection do you need to have enabled in order to link?
 
 C is based on manual memory-management, and you can't generally have
 garbage-collection in it.  Hence, libpq is *not* garbage-collected and
 your statement that it needs garbage collection enabled doesn't make
 much sense to me.
 
 Are you talking about libpq bindings to some other language, perhaps?
 
 Please, can someone confirm this or is this the wrong list?
 
 This is the right list.
 
 Cheers,
 Peter
 -- 
 http://sjamaan.ath.cx
 --
 The process of preparing programs for a digital computer
 is especially attractive, not only because it can be economically
 and scientifically rewarding, but also because it can be an aesthetic
 experience much like composing poetry or music.
   -- Donald Knuth
 


