Re: [GENERAL] Memory consumption for Query

2017-04-24 Thread dhaval jaiswal
>> Other operations don't really consume much memory.


Is there any way to find out that as well?


>> You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash, or materialize.

I am aware of it.
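For what it's worth, much later PostgreSQL releases (14 and up, not the 9.x versions discussed in this thread) added a system view that answers the "other operations" part of the question for the current session:

```sql
-- PostgreSQL 14+ only: inspect the current backend's memory contexts,
-- largest first, to see where session memory actually goes.
SELECT name, used_bytes, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```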





Sent from Outlook



From: Albe Laurenz 
Sent: Monday, April 24, 2017 12:54 PM
To: 'dhaval jaiswal *EXTERN*'; PostgreSQL General
Subject: RE: Memory consumption for Query

dhaval jaiswal wrote:
> How can I check how much memory a query is consuming?
>
> Is there a tool that can show a query's memory consumption during execution or for its output?
>
> Let's say, for the following query, how do I calculate the memory consumption?
>
> select * from test where id=1;

That query will not consume memory worth mentioning unless
"test" is a non-trivial view.

You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash, or materialize.

Other operations don't really consume much memory.

Yours,
Laurenz Albe


Re: [GENERAL] Memory consumption for Query

2017-04-24 Thread Albe Laurenz
dhaval jaiswal wrote:
> How can I check how much memory a query is consuming?
> 
> Is there a tool that can show a query's memory consumption during execution or for its output?
> 
> Let's say, for the following query, how do I calculate the memory consumption?
> 
> select * from test where id=1;

That query will not consume memory worth mentioning unless
"test" is a non-trivial view.

You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory-intensive operations like sort, hash, or materialize.

Other operations don't really consume much memory.
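As an illustration of the suggestion above (the table is the one from the question; the reported figures vary by data size), a Sort node prints its memory use in the ANALYZE output:

```sql
-- A memory-intensive node reports its memory in the plan output, e.g.
--   Sort Method: quicksort  Memory: 3425kB
EXPLAIN (ANALYZE)
SELECT * FROM test ORDER BY id DESC;
```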

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Memory consumption for Query

2017-04-22 Thread dhaval jaiswal
How to check how much memory query is consuming.

Is there tool can check of query consuming memory for the execution or output.


Let's say for following query how to calculate memory consumption.


select * from test where id=1;



Sent from Outlook


Re: [GENERAL] Memory usage per session

2016-07-14 Thread AMatveev
Hi

>I think the "problem" that he is having is fixable only by changing how 
>PostgreSQL itself works.
>His problem is a PL/pgSQL function which is 11K lines in length.
>When invoked, this function is "compiled" into a large tokenized parse tree.
>This parse tree is only usable in the session which invoked the function.
>Apparently this parse tree takes a lot of memory. And "n" concurrent users of 
>this,
>highly used, function will therefore require "n" times as much memory because 
>the parse tree is _not_ shareable.
>This is explained in:
>https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING


There is an interesting continuation of the discussion at
pgsql-hack...@postgresql.org ([HACKERS] One process per session lack
of sharing):

https://www.postgresql.org/message-id/flat/301417900.20160712173911%40bitec.ru#301417900.20160712173...@bitec.ru

>I agree that there's not really a plan to implement this, but I don't
>agree that connection pooling solves the whole problem.  Most people
>can't get by with statement pooling, so in practice you are looking at
>transaction pooling or session pooling.  And that means that you can't
>really keep the pool size as small as you'd like because backends can
>be idle in transaction for long enough to force the pool size to be
>pretty large.  Also, pooling causes the same backends to get reused
>for different sessions which touch different relations and different
>functions so that, for example, the relcache and the PL/pgsql function
>caches grow until every one of those sessions has everything cached
>that any client needs.  That can cause big problems.
>
>So, I actually think it would be a good idea to think about this.  The
>problem, of course, is that as long as we allow arbitrary parts of the
>code - including extension code - to declare global variables and
>store arbitrary stuff in them without any coordination, it's
>impossible to imagine hibernating and resuming a session without a
>risk of things going severely awry.  This was a major issue for
>parallel query, but we've solved it, mostly, by designating the things
>that rely on global variables as parallel-restricted, and there
>actually aren't a ton of those.  So I think it's imaginable that we
>can get to a point where we can, at least in some circumstances, let a
>backend exit and reconstitute its state at a later time.  It's not an
>easy project, but I think it is one we will eventually need to do.
>Insisting that the current model is working is just sticking our head
>in the sand.  It's mostly working, but there are workloads where it
>fails badly - and competing database products survive a number of
>scenarios where we just fall on our face.
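For the transaction pooling discussed above, a minimal PgBouncer configuration sketch (all values are illustrative, not recommendations):

```ini
; pgbouncer.ini -- illustrative fragment
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; server connection is released at commit/rollback
default_pool_size = 20    ; keep the backend count near the core count
max_client_conn = 3000    ; many mostly-idle clients share few backends
```

As the quoted text notes, this does not make the per-backend caches shareable; it only bounds how many backends exist.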





Re: [GENERAL] Memory usage per session

2016-07-11 Thread AMatveev
Hi

> On Jul 08, John McKown modulated:
> ...
>> I think the "problem" that he is having is fixable only by changing how
>> PostgreSQL itself works. His problem is a PL/pgSQL function which is
>> 11K lines in length. When invoked, this function is "compiled" into a
>> large tokenized parse tree. This parse tree is only usable in the
>> session which invoked the function. Apparently this parse tree
>> takes a lot of memory.

It's pleasant to see smart people, and I got some help here.
The most valuable advice is to deploy PgPool or PgBouncer.
Thanks, guys. Usually I don't respond to this kind of topic:
http://www.joelonsoftware.com/articles/fog18.html
But maybe it helps somebody.

> Right.  I'm not entirely sure the original poster wants to hear
> practical solutions,
Of course this answer requires "moral justification" :)

> but I think there are three ways that someone
> familar with Postgres would avoid this type of problem:

> 1. Connection pooling. Set the max connection limit and other tuning
>parameters appropriately for your workload and available
>resources. Don't get into a state as was described (thousands of
>open connections and only hundreds "active"). Make your client
>applications more disciplined.
You are joking :)
They pay us money, and they want an easy and cheap solution :))
I can say a terrible thing: they can pay that money to others :)))
> 2. Buy more RAM.  You can easily grow to 512GB in just basic
>dual-socket servers these days.  This hardware cost may well 
>be worth it to avoid human labor costs.
Thanks cap :))
http://www.reduxsquad.com/wp-content/uploads/2016/02/hotels-com-spring-break-sale-captain-obvious-workout-bathroom-large-8.jpg
> 3. Rewrite or refactor such complex stored procedures in a different
>programming language such as C or Python, so your PL/pgsql stored
>procedures remain small glue around libraries of code.  Postgres
>makes it very trivial to extend the system with such procedural
>libraries.
>
Let's read first :)





Re: [GENERAL] Memory usage per session

2016-07-08 Thread John R Pierce

On 7/8/2016 12:00 PM, Karl Czajkowski wrote:

3. Rewrite or refactor such complex stored procedures in a different
programming language such as C or Python, so your PL/pgsql stored
procedures remain small glue around libraries of code.  Postgres
makes it very trivial to extend the system with such procedural
libraries.


4. Move said business logic to app servers, to unload the database
server from doing so much compute and get better all-around
performance. Use plpgsql functions for inner functions only where
there are significant performance gains. It's easy to cluster app
servers; it's not so easy to parallelize database servers.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Karl Czajkowski
On Jul 08, John McKown modulated:
...
> I think the "problem" that he is having is fixable only by changing how
> PostgreSQL itself works. His problem is a PL/pgSQL function which is
> 11K lines in length. When invoked, this function is "compiled" into a
> large tokenized parse tree. This parse tree is only usable in the
> session which invoked the function. Apparently this parse tree
> takes a lot of memory.

Right.  I'm not entirely sure the original poster wants to hear
practical solutions, but I think there are three ways that someone
familiar with Postgres would avoid this type of problem:

1. Connection pooling. Set the max connection limit and other tuning
   parameters appropriately for your workload and available
   resources. Don't get into a state as was described (thousands of
   open connections and only hundreds "active"). Make your client
   applications more disciplined.

2. Buy more RAM.  You can easily grow to 512GB in just basic
   dual-socket servers these days.  This hardware cost may well 
   be worth it to avoid human labor costs.

3. Rewrite or refactor such complex stored procedures in a different
   programming language such as C or Python, so your PL/pgsql stored
   procedures remain small glue around libraries of code.  Postgres
   makes it very trivial to extend the system with such procedural
   libraries.
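Point 3 could look like the sketch below, assuming the PL/Python extension is available (the function names are hypothetical): the heavy logic lives outside PL/pgSQL, so no large per-session PL/pgSQL instruction tree is built for it.

```sql
-- Move heavy logic into PL/Python; the PL/pgSQL layer stays thin glue.
CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION heavy_logic(x integer)
RETURNS integer AS $$
    # arbitrary Python; not compiled into a PL/pgSQL parse tree
    return x * 2
$$ LANGUAGE plpython3u;

CREATE OR REPLACE FUNCTION thin_glue(x integer)
RETURNS integer AS $$
BEGIN
    RETURN heavy_logic(x);  -- small wrapper, cheap to compile per session
END;
$$ LANGUAGE plpgsql;
```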
   

Karl




Re: [GENERAL] Memory usage per session

2016-07-08 Thread Pavel Stehule
2016-07-08 17:49 GMT+02:00 :

> Hi
>
>
> >> Oracle: about 5M
> >> postgreSql: about 160М
>
>
>
> >The almost session memory is used for catalog caches. So you should to
> have big catalog and long living sessions.
>
> >What do you do exactly?
>
> I've generate test code that  emulates instruction tree size for our
> production code.
> This test shows:
> -What is the size of instruction tree for our typical  BP
>   it's greater than 300M for each session
> -How often do PostgreSql parse the text
>   When postgres clean cache, so much often
>

PostgreSQL parses the source code of functions once per session


>
>
> So Oracle is much better in this case.
> It's very difficult really estimate in such case, to buy Oracle or to by
> hardware.
>

It should be said that the implementations of PL/pgSQL and PL/SQL are
strongly different. When your use case is not usual, the migration to
Postgres sometimes needs a redesign.

Regards

Pavel


Re: [GENERAL] Memory usage per session

2016-07-08 Thread John McKown
On Fri, Jul 8, 2016 at 11:26 AM, Melvin Davidson 
wrote:

>
>
> On Fri, Jul 8, 2016 at 11:49 AM,  wrote:
>
>> Hi
>>
>>
>> >> Oracle: about 5M
>> >> postgreSql: about 160М
>>
>>
>>
>> >The almost session memory is used for catalog caches. So you should to
>> have big catalog and long living sessions.
>>
>> >What do you do exactly?
>>
>> I've generate test code that  emulates instruction tree size for our
>> production code.
>> This test shows:
>> -What is the size of instruction tree for our typical  BP
>>   it's greater than 300M for each session
>> -How often do PostgreSql parse the text
>>   When postgres clean cache, so much often
>>
>> So Oracle is much better in this case.
>> It's very difficult really estimate in such case, to buy Oracle or to by
>> hardware.
>>
>>
>
> *My questions:*
> >What is the actual O/S that PostgreSQL is installed on?
> >How much total memory is on the server?
> >I would be very curious about the values you have  specified in
> postgresql.conf?
> > Also, what is the exact version of PostgreSQL you are using?
> >What is the total time to complete the test for all 3 DB's?
> >The best I can tell is that with all the unknowns, you are comparing
> apples to oranges.
>
> *Your answers:*
> >There is real problem for us.
> >The PL/pgSQL interpreter parses the function's source text and produces
> an internal binary instruction tree the first time the function is called
> (within each session)
>
>
> *Your answer is gibberish and has nothing to do with my questions.*
>
> *Have you even tuned the postgresql.conf?*
> *You cannot fairly compare PostgreSQL with any other database unless you
> first tune its postgresql.conf.*
>
> *Melvin Davidson*
>

I think the "problem" that he is having is fixable only by changing how
PostgreSQL itself works. His problem is a PL/pgSQL function which is 11K
lines in length. When invoked, this function is "compiled" into a large
tokenized parse tree. This parse tree is only usable in the session which
invoked the function. Apparently this parse tree takes a lot of memory.
And "n" concurrent users of this highly used function will therefore
require "n" times as much memory, because the parse tree is _not_
shareable. This is explained in:
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
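The per-session cost described above can be observed directly; a sketch (the function name is hypothetical, and \timing is a psql meta-command):

```sql
-- In a fresh session, the first call of a PL/pgSQL function pays the
-- parse cost and allocates the instruction tree; later calls in the
-- same session reuse the cached tree. A new session pays it all again.
\timing on
SELECT big_function();  -- first call: builds the per-session parse tree
SELECT big_function();  -- second call: reuses the session-local cache
```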

In previous posts, he implied that he is running on some version of
Windows by referencing the VC compiler. I am _guessing_ that the other DBs
mentioned (MSSQL and Oracle) implement their server-side programming
differently, so that it takes less memory; perhaps by allowing the "compiled
program" to be shared between sessions.




-- 
"Pessimism is a admirable quality in an engineer. Pessimistic people check
their work three times, because they're sure that something won't be right.
Optimistic people check once, trust in Solis-de to keep the ship safe, then
blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

>From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Melvin Davidson
On Fri, Jul 8, 2016 at 11:49 AM,  wrote:

> Hi
>
>
> >> Oracle: about 5M
> >> postgreSql: about 160М
>
>
>
> >The almost session memory is used for catalog caches. So you should to
> have big catalog and long living sessions.
>
> >What do you do exactly?
>
> I've generate test code that  emulates instruction tree size for our
> production code.
> This test shows:
> -What is the size of instruction tree for our typical  BP
>   it's greater than 300M for each session
> -How often do PostgreSql parse the text
>   When postgres clean cache, so much often
>
> So Oracle is much better in this case.
> It's very difficult really estimate in such case, to buy Oracle or to by
> hardware.
>
>

*My questions:*
>What is the actual O/S that PostgreSQL is installed on?
>How much total memory is on the server?
>I would be very curious about the values you have specified in
postgresql.conf?
> Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DBs?
>The best I can tell is that, with all the unknowns, you are comparing
apples to oranges.

*Your answers:*
>There is real problem for us.
>The PL/pgSQL interpreter parses the function's source text and produces an
internal binary instruction tree the first time the function is called
(within each session)


*Your answer is gibberish and has nothing to do with my questions.*

*Have you even tuned the postgresql.conf?*
*You cannot fairly compare PostgreSQL with any other database unless you
first tune its postgresql.conf.*

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hi


>> Oracle: about 5M
>> postgreSql: about 160М



>The almost session memory is used for catalog caches. So you should to have big catalog and long living sessions. 

>What do you do exactly?

I've generated test code that emulates the instruction tree size of our production code.
This test shows:
-What the size of the instruction tree is for our typical BP:
  it's greater than 300M for each session
-How often PostgreSQL parses the text:
  the more often postgres cleans the cache, the more often it reparses

So Oracle is much better in this case.
It's very difficult to really estimate, in such a case, whether to buy Oracle or to buy hardware.






Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160М


>What is the actual O/S that PostgreSQL is installed on? 
>How much total memory is on the server? 
>I would be very curious about the values you have  specified in postgresql.conf?
> Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DB's?
>The best I can tell is that with all the unknowns, you are comparing apples to oranges.

There is a real problem for us.
The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session).

The size of this instruction tree depends only on the PostgreSQL build.

And the size is big, and it is not shared between sessions.

I understand that nobody says: "Damn! One second, I will implement it soon" :)

Thanks to all for the constructive answers.




Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 17:07, amatv...@bitec.ru wrote:

Hi


The test performs about 11K lines of code
Oracle: about 5M
postgreSql: about 160М



Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

We can state as fact:
We currently work with Oracle.
Our code base is about 4000K lines of code.
In our last project we have:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we think about buying hardware or software;
that's another question.
So with this memory consumption it can really be cheaper to buy Oracle.

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

It's possible only with an application server;


No, you can deploy PgPool or PgBouncer.
Apart from that, I just checked on my system. User sessions have a size of 16M,
not 160M.


for a local network, a thick client has reasonable advantages.
  We just can't implement all functions in a thin client today.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Melvin Davidson
On Fri, Jul 8, 2016 at 10:07 AM,  wrote:

> Hi
>
> >> >> The test performs about 11K lines of code
> >> >> Oracle: about 5M
> >> >> postgreSql: about 160М
>
>
> > Do you have 100 CPUs on this system which apparently doesn't have 16G
> > of RAM available for PG to use?
> We can say at fact:
> We currently  work at oracle.
> Our code base about 4000 k line of code
> In out last project we have:
> 3000 current connection
> 200 active session
> So 16g it's very optimistic.
> Of course  we think about buy hardware or software.
> It's other question.
> So with this memory consumption it can be really cheaper to by Oracle.
> > If not, you should probably consider connection pooling to reduce the
> > number of PG sessions to something approaching the number of CPUs/cores
> > you have in the system.
> It's  possible only  with  application  server,
> for local network thick client has reasonable advantages.
>  We just can't implement today all functions on thin client.
>
>
>
>

What is the actual O/S that PostgreSQL is installed on?
How much total memory is on the server?
I would be very curious about the values you have  specified in
postgresql.conf?
 Also, what is the exact version of PostgreSQL you are using?
What is the total time to complete the test for all 3 DBs?
The best I can tell is that with all the unknowns, you are comparing apples
to oranges.

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev


Hello.

>> Oracle: about 5M
>> postgreSql: about 160М


>I'm admittedly ignorant of this type of testing. But if the memory usage for PostgreSQL is in the server, perhaps due to caching (how to test?), then it likely would _not_ linearly scale up as the number of clients increased, because every client would share the same cache data within the server. Or are you measuring the memory usage where the client is on one machine and the PostgreSQL server is a different machine, with the client machine getting the memory hit?

I can send a zip file with the test code on request.
To tell the truth, it's described in the documentation:
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
"The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session)."

It's very sad :(

The test is very simple:
Generate code:
DECLARE
  svSql "varchar";
BEGIN
  for nvi in 1..10
  loop
    svSql = 'CREATE OR REPLACE FUNCTION perfa."func'||nvi||'" (
)
RETURNS void AS
$body$
DECLARE
  svSql "varchar";
BEGIN
  svSql:='''';
  PERFORM perfb."func'||(nvi-1)*10+1||'"();
  PERFORM perfb."func'||(nvi-1)*10+2||'"();
  PERFORM perfb."func'||(nvi-1)*10+3||'"();
  PERFORM perfb."func'||(nvi-1)*10+4||'"();
  PERFORM perfb."func'||(nvi-1)*10+5||'"();
  PERFORM perfb."func'||(nvi-1)*10+6||'"();
  PERFORM perfb."func'||(nvi-1)*10+7||'"();
  PERFORM perfb."func'||(nvi-1)*10+8||'"();
  PERFORM perfb."func'||(nvi-1)*10+9||'"();
  PERFORM perfb."func'||(nvi-1)*10+10||'"();         
END;
$body$
LANGUAGE ''plpgsql''
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER';
    EXECUTE svSql;
  end loop;
END;

Download JMeter.
Run this function in 50 threads.

About 4G of memory will be consumed in one second.
It's hard to miss
:))






Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev
Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160М


> Do you have 100 CPUs on this system which apparently doesn't have 16G
> of RAM available for PG to use?
We can state as fact:
We currently work with Oracle.
Our code base is about 4000K lines of code.
In our last project we have:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we think about buying hardware or software;
that's another question.
So with this memory consumption it can really be cheaper to buy Oracle.
> If not, you should probably consider connection pooling to reduce the
> number of PG sessions to something approaching the number of CPUs/cores
> you have in the system.
It's possible only with an application server;
for a local network, a thick client has reasonable advantages.
 We just can't implement all functions in a thin client today.





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Stephen Frost
* amatv...@bitec.ru (amatv...@bitec.ru) wrote:
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
> 
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for 
> > performance.
> Of course we understand that Unix is faster (damn fork :)))
> Our current problem is memory:
>  (160M vs 5M) * 100 sessions = 16G vs 0.5G
> We can simply get "out of memory".
> :(((

Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

Thanks!

Stephen




Re: [GENERAL] Memory usage per session

2016-07-08 Thread John McKown
On Fri, Jul 8, 2016 at 8:16 AM,  wrote:

> Hi
>
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> Hi.
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
>
>
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for
> performance.
> Of course we understand that Unix is faster (damn fork :)))
> Our current problem is memory:
>  (160M vs 5M) * 100 sessions = 16G vs 0.5G
> We can simply get "out of memory".
> :(((
>

I'm admittedly ignorant of this type of testing. But if the memory usage
for PostgreSQL is in the server, perhaps due to caching (how to test?),
then it likely would _not_ linearly scale up as the number of clients
increased, because every client would share the same cache data within the
server. Or are you measuring the memory usage where the client is on one
machine and the PostgreSQL server is a different machine, with the client
machine getting the memory hit?

-- 
"Pessimism is a admirable quality in an engineer. Pessimistic people check
their work three times, because they're sure that something won't be right.
Optimistic people check once, trust in Solis-de to keep the ship safe, then
blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

>From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown


Re: [GENERAL] Memory usage per session

2016-07-08 Thread Pavel Stehule
Hi

2016-07-08 15:16 GMT+02:00 :

> Hi
>
> > On 08/07/2016 14:11, amatv...@bitec.ru wrote:
> >> Hi.
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160М
>
>
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for
> performance.
> Of course we understand that Unix is faster (damn fork :)))
> Our current problem is memory:
>  (160M vs 5M) * 100 sessions = 16G vs 0.5G
> We can simply get "out of memory".
> :(((
>

Almost all of the session memory is used for catalog caches. So you must
have a big catalog and long-living sessions.

What do you do exactly?

Regards

Pavel

>
>
>
>


Re: [GENERAL] Memory usage per session

2016-07-08 Thread AMatveev
Hi

> On 08/07/2016 14:11, amatv...@bitec.ru wrote:
>> Hi.
>> The test performs about 11K lines of code
>> Memory usage per session:
>> Oracle: about 5M
>> MSSqlServer: about 4M
>> postgreSql: about 160М


> Visual C???
> You will have to run PostgreSQL on a proper Unix system to test for 
> performance.
Of course we understand that Unix is faster (damn fork :)))
Our current problem is memory:
 (160M vs 5M) * 100 sessions = 16G vs 0.5G
We can simply get "out of memory".
:(((





Re: [GENERAL] Memory usage per session

2016-07-08 Thread Achilleas Mantzios

On 08/07/2016 14:11, amatv...@bitec.ru wrote:

Hi.
We have tested PostgreSQL, Oracle, and MSSQLServer.
The test performs about 11K lines of code.
Memory usage per session:
Oracle: about 5M
MSSQLServer: about 4M
postgreSql: about 160M
The result of postgreSql is very sad (our typical business logic has about 30K
lines of code).
How can I reduce memory consumption per session?
Note, we can't move the business logic to an application server, as it would lead
us to performance problems.

I can send the test script on request.

Test description:
PostgreSQL 9.5.3, compiled by Visual C build 1800, 64-bit


Visual C???
You will have to run PostgreSQL on a proper Unix system to test for performance.


At the same time we run 50 sessions that perform the following functions:
CREATE OR REPLACE FUNCTION perfa.func9
...
BEGIN
   svSql:='';
   PERFORM perfb."func91"();
   ...
END;

CREATE OR REPLACE FUNCTION perfb.func91
...
BEGIN
   PERFORM perfc."func911"();
   ...
END;

CREATE OR REPLACE FUNCTION perfc.func911 (
)
RETURNS void AS
$body$
DECLARE
   svSql BIGINT;
BEGIN
   svSql:=0;
   ...
   svSql:=svSql+10;
END;
$body$






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





[GENERAL] Memory usage per session

2016-07-08 Thread AMatveev
Hi.
We have tested PostgreSQL, Oracle, and MSSQLServer.
The test performs about 11K lines of code.
Memory usage per session:
Oracle: about 5M
MSSQLServer: about 4M
postgreSql: about 160M
The result of postgreSql is very sad (our typical business logic has about 30K
lines of code).
How can I reduce memory consumption per session?
Note, we can't move the business logic to an application server, as it would lead
us to performance problems.

I can send the test script on request.

Test description:
PostgreSQL 9.5.3, compiled by Visual C build 1800, 64-bit
At the same time we run 50 sessions that perform the following functions:
CREATE OR REPLACE FUNCTION perfa.func9
...
BEGIN
  svSql:='';
  PERFORM perfb."func91"();
  ...  
END;

CREATE OR REPLACE FUNCTION perfb.func91 
...
BEGIN
  PERFORM perfc."func911"();
  ... 
END;

CREATE OR REPLACE FUNCTION perfc.func911 (
)
RETURNS void AS
$body$
DECLARE
  svSql BIGINT;
BEGIN
  svSql:=0;
  ...
  svSql:=svSql+10;   
END;
$body$





Re: [GENERAL] memory problem with refresh materialized view

2016-02-12 Thread Albe Laurenz
Enrico Pirozzi wrote:
> I have a new postgresql 9.5.0 installation on a new virtual server debian 8.3 
> x64 with 4gb RAM, I have
> compiled postgresql from source.
> 
> When I import a dump with materialized views I see that postgres process 
> takes about all 4 Gb and then
> I have this error
> 
>  fork: Cannot allocate memory

What are the memory-related settings of the database into which you restore?

Yours,
Laurenz Albe
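The settings Laurenz is asking about are typically the ones below (the values are purely illustrative; actual tuning depends on the workload and available RAM):

```
# postgresql.conf -- memory-related settings that matter during a restore
shared_buffers = 1GB            # shared cache, allocated up front
work_mem = 16MB                 # per sort/hash node, per backend
maintenance_work_mem = 256MB    # used by CREATE INDEX, REFRESH, VACUUM
max_connections = 50            # each backend adds per-process overhead
```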



[GENERAL] memory problem with refresh materialized view

2016-02-11 Thread Enrico Pirozzi
Hi
I have a new PostgreSQL 9.5.0 installation on a new virtual server (Debian
8.3 x64 with 4 GB RAM); I have compiled PostgreSQL from source.

When I import a dump with materialized views, I see that the postgres process
takes almost all 4 GB, and then I get this error:

 fork: Cannot allocate memory


Can anyone help me?

Thanks in advance :)

Enrico

-- 
PostgreSQL openday Rimini 27-02-2016


Enrico Pirozzi
Tel.  +39 0861 1855771 - Mob.+39 328 4164437 - Fax  +39 0861 1850310
http://www.pgtraining.com  -
i...@pgtraining.com
www.enricopirozzi.info - i...@enricopirozzi.info
Skype sscotty71 - Gtalk sscott...@gmail.com


Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-14 Thread Adrian Klaver

On 12/13/2015 01:23 PM, Gerhard Wiesinger wrote:

On 13.12.2015 21:14, Bill Moran wrote:

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory?


For me the complete picture doesn't look consistent.


I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that
whatever
is running under node is doing something in a horrifically
memory-inefficient
manner. Since you mention that you see nothing in the PG logs, that
makes it
even more likely (to me) that you're looking entirely in the wrong place.

I'd be willing to bet a steak dinner that if you put the web server on a
different server than the DB, that the memory problems would follow the
web server and not the DB server.


Changes in config:
track_activity_query_size = 102400
work_mem = 100MB

OK, we restarted PostgreSQL, keeping it stopped for a few seconds, and logged
top every second:

When PostgreSQL was down nearly all memory was freed, looks good to me.
So it is likely that node and other processes are not the cause.
Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
Swap: 512M Total, 477M Used, 35M Free, 93% Inuse

When PostgreSQL restarted, Inactive was growing fast (~1min):
Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M
Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

After some few minutes we are back again at the same situation:
Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

The steak dinner is mine :-) Donating to the PostgreSQL community :-)


To me, all the above proves is that this is a complete system issue, and only 
with all the parts running do you get a problem. It still doesn't indicate 
which part or interaction of parts is the issue. This is further muddied 
by the lack of any description of what, if anything, you were doing in the above 
scenarios.




Any further ideas? I don't think this is normal system behaviour.

Ciao,
Gerhard






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

Hello,

some further details from the original FreeBSD 10.1 machine:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

  PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98% 
postgres: username dbnamee 127.0.0.1(43367)  (postgres)
75862 pgsql 1  200  7321M  7266M select  4   7:04 0.00% 
postgres: checkpointer process(postgres)
75863 pgsql 1  200  7321M  7260M select  4   3:34 0.00% 
postgres: writer process(postgres)
75867 pgsql 1  200  7323M  7237M sbwait  0   3:19 0.00% 
postgres: username dbnamee ipaddress(58977)  (postgres)
76178 pgsql 1  200  7323M  7237M sbwait  2   3:18 0.00% 
postgres: username dbnamee ipaddress(35807)  (postgres)
75868 pgsql 1  200  7323M  7237M sbwait  0   3:20 0.00% 
postgres: username dbnamee ipaddress(58978)  (postgres)
75884 pgsql 1  200  7323M  7236M sbwait  3   3:20 0.00% 
postgres: username dbnamee ipaddress(59908)  (postgres)
75869 pgsql 1  200  7323M  7236M sbwait  3   3:20 0.00% 
postgres: username dbnamee ipaddress(58982)  (postgres)
76181 pgsql 1  200  7323M  7236M sbwait  2   3:18 0.00% 
postgres: username dbnamee ipaddress(35813)  (postgres)
75883 pgsql 1  200  7323M  7236M sbwait  0   3:20 0.00% 
postgres: username dbnamee ipaddress(59907)  (postgres)
76180 pgsql 1  200  7323M  7236M sbwait  1   3:19 0.00% 
postgres: username dbnamee ipaddress(35811)  (postgres)
76177 pgsql 1  200  7323M  7236M sbwait  1   3:18 0.00% 
postgres: username dbnamee ipaddress(35712)  (postgres)
76179 pgsql 1  200  7323M  7236M sbwait  5   3:18 0.00% 
postgres: username dbnamee ipaddress(35810)  (postgres)
64951 pgsql 1  750  7375M   662M CPU11   0:01 11.47% 
postgres: username dbnamee 127.0.0.1(32073)  (postgres)
64950 pgsql 1  770  7325M   598M CPU66   0:02 16.55% 
postgres: username dbnamee 127.0.0.1(44251)  (postgres)


Out of memory:
kernel: swap_pager_getswapspace(4): failed
kernel: swap_pager_getswapspace(8): failed
kernel: swap_pager_getswapspace(3): failed

The main issue is IMHO (as far as I understood the FreeBSD memory system) 
that 20G are INACTIVE. When I subtract the shared memory, even ~13GB 
should be available, but they are still allocated, just inactive 
(INACTIVE). INACTIVE memory might be clean or dirty. As we get into 
out-of-memory situations, it is likely that the memory is dirty (otherwise 
it would have been reused).


Config:
Memory: 32GB, Swap: 512MB

maintenance_work_mem = 512MB
effective_cache_size = 10GB
work_mem = 892MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7080MB
max_connections = 80
autovacuum_max_workers = 3

Thnx.

Ciao,
Gerhard


On 13.12.2015 08:49, Gerhard Wiesinger wrote:

Hello,

We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple 
worker processes connected via persistent connections to PostgreSQL, 
they perform just simple queries with SELECT on primary keys and 
simple INSERTS/UPDATES. Normally nearly all the workers are idle but 
they still consume the maximum configured work mem on the PostgreSQL 
server and the memory is also resident. If some other queries get in 
we get into out of memory situations. So it looks like PostgreSQL has 
memory leaks.


I found a test scenario to reproduce it also on a newer FreeBSD 10.2 
VM as well as in a Fedora 23 VM (both with PostgreSQL 9.4.5):


Executions in psql with one persistent connection:
-- Create the table
CREATE TABLE t_random AS SELECT s, md5(random()::text) FROM 
generate_Series(1,1) s;

-- Create the index
CREATE INDEX ON t_random(s);

-- Restart psql with a new connection:

-- Memory goes slightly up after each execution, even after canceling:
-- Memory leak on FreeBSD 10.2/Fedora 23 and PostgreSQL 9.4.5 on 
cancel the query or multiple execution

SELECT * FROM t_random ORDER BY md5 LIMIT 10;

-- Therefore I created a function:
CREATE OR REPLACE FUNCTION execmultiplei(IN num int8)
RETURNS void AS $$
BEGIN
  -- RAISE NOTICE 'num=%', num;
  FOR i IN 1..num LOOP
PERFORM * FROM t_random WHERE s = i;
  END LOOP;
END;
$$  LANGUAGE plpgsql;

-- Test it several times
SELECT execmultiplei(1000);

-- Linux testing (FreeBSD is similar), relevant part is RES (resident 
memory):

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
-- after startup of psql
26851 postgres  20   0 2363276   7432   6292 S   0.0  0.2 0:00.00 
postgres: postgres postgres [local] idle

-- Memory goes up, ok so far
26851 postgres  20   0 2365732 255152 253548 R  99.0  6.3 0:10.77 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 408464 406788 R 100.0 10.1 0:17.81 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 864472 862576 R 100.0 21.4 0:38.90 
postgres: postgres 

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

Hello Bill,

Thank you for your response, comments inline:

On 13.12.2015 16:05, Bill Moran wrote:

On Sun, 13 Dec 2015 09:57:21 +0100
Gerhard Wiesinger  wrote:

some further details from the original FreeBSD 10.1 machine:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98%
postgres: username dbnamee 127.0.0.1(43367)  (postgres)



I see no evidence of an actual leak here. Each process is basically using
the 7G of shared_buffers you have allocated in the config (which is only
7G _total_ for all processes, since it's shared memory)


OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?


Out of memory:
kernel: swap_pager_getswapspace(4): failed
kernel: swap_pager_getswapspace(8): failed
kernel: swap_pager_getswapspace(3): failed

Main issue is IMHO (as far as I understood the FreeBSD Memory system)
that 20G are INACTIVE. When I subtract the shared memory, even ~13GB
should be available, but they are still allocated but inactive
(INACTIVE). INACTIVE memory might be clean or dirty. As we get into out
of memory situations it is likely that the memory is dirty (otherwise it
would have been reused).

Not quite correct. Inactive memory is _always_ available for re-use.


Are you sure that's true?

Monitoring inactive memory:
cat vm_stat.sh
#!/usr/bin/env bash

while [ 1 ]; do
  date +%Y.%m.%d.%H.%M.%S
  sysctl -a | grep vm.stats.vm.
  sleep 1
done

And even we get out of memory with swap_pager_getswapspace Inactive 
Memory (from the log file) is around 20GB (doesn't go down or up)

vm.stats.vm.v_inactive_count: 5193530 (x 4k pages, i.e. around 20GB)

Then we have 20GB inactive memory, but we still get out of memory with 
kernel: swap_pager_getswapspace(4): failed. Any ideas why?
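The 20GB figure can be sanity-checked arithmetically (a quick sketch, assuming the standard FreeBSD page size of 4 kB):

```shell
# Convert the reported inactive page count into gigabytes.
# vm.stats.vm.v_inactive_count is in pages; page size assumed to be 4096 bytes.
pages=5193530
page_size=4096
bytes=$((pages * page_size))
echo "$((bytes / 1024 / 1024 / 1024)) GB inactive"   # -> 19 GB, i.e. roughly 20 GB
```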





Config:
Memory: 32GB, Swap: 512MB

Probably not the cause of this problem, but this is a non-optimal layout.
If you're going to use swap at all, it should generally be 2x the size of
your RAM for optimal performance under memory pressure.


Yes, we will try to disable it totally. Nevertheless why do we get out 
of memory/Swap?





maintenance_work_mem = 512MB
effective_cache_size = 10GB
work_mem = 892MB

I expect that this value is the cause of the problem. The scenario you
describe below is sorting a large table on an unindexed column, meaning
it will have to use all that work_mem. I'd be interested to see the
output of:

EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;


That was only a test query; it has nothing to do with the production 
queries. Those are mostly SELECTs/INSERTs/UPDATEs on primary keys.



But even without that information, I'd recommend you reduce work_mem
to about 16M or so.


Why so low? E.g. sorting for reporting or some long-running queries would 
then be done on disk and not in memory.



wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7080MB
max_connections = 80
autovacuum_max_workers = 3

[snip]


We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple
worker processes connected via persistent connections to PostgreSQL,
they perform just simple queries with SELECT on primary keys and
simple INSERTS/UPDATES.

That's not at all the test scenario you show below. The scenario below
is a large sort operation on a non-indexed column, which is vastly
different than a single-row fetch based on an index.


Yes, that non-indexed select was just for testing purposes.




Normally nearly all the workers are idle but
they still consume the maximum configured work mem on the PostgreSQL
server and the memory is also resident.

I see no evidence of that in your top output. Each process has a
reference to the 7G of shared_buffers you allocated, which is memory
shared by all processes, and is expected. I'm not as familiar with
Linux top, but the results appear to be the same.


Yes, might be the case, but if it is n times 7G shared memory then we 
have ~20GB Inactive Memory available, so plenty of memory. And why are 
we getting: kernel: swap_pager_getswapspace(4): failed?


Thnx.

Ciao,
Gerhard





Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Tom Lane
Gerhard Wiesinger  writes:
>> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
>> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

Just judging from the name of the function, I would bet this is a direct
result of having only 512M of swap configured.  As Bill already pointed
out, that's a pretty useless choice on a system with 32G of RAM.  As soon
as the kernel tries to push out any significant amount of idle processes,
it's gonna be out of swap space.  The numbers you show above prove that
it is almost out of free swap already.

Also, while that 20G of "inactive" pages may be candidates for reuse,
they probably can't actually be reused without swapping them out ...
and there's noplace for that data to go.

regards, tom lane




Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 16:35:08 +0100
Gerhard Wiesinger  wrote:

> Hello Bill,
> 
> Thank you for your response, comments inline:
> 
> On 13.12.2015 16:05, Bill Moran wrote:
> > On Sun, 13 Dec 2015 09:57:21 +0100
> > Gerhard Wiesinger  wrote:
> >> some further details from the original FreeBSD 10.1 machine:
> >>
> >> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> >> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> >>
> >> PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU 
> >> COMMAND
> >> 77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98%
> >> postgres: username dbnamee 127.0.0.1(43367)  (postgres)
> > 
> >
> > I see no evidence of an actual leak here. Each process is basically using
> > the 7G of shared_buffers you have allocated in the config (which is only
> > 7G _total_ for all processes, since it's shared memory)
> 
> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

You haven't provided enough information to isolate that cause yet. What's
in the Postgres log? Surely it will have logged something when its request
for RAM was denied, and it should be more informational than the OS'
generic message.

> >> Out of memory:
> >> kernel: swap_pager_getswapspace(4): failed
> >> kernel: swap_pager_getswapspace(8): failed
> >> kernel: swap_pager_getswapspace(3): failed
> >>
> >> Main issue is IMHO (as far as I understood the FreeBSD Memory system)
> >> that 20G are INACTIVE. When I subtract the shared memory, even ~13GB
> >> should be available, but they are still allocated but inactive
> >> (INACTIVE). INACTIVE memory might be clean or dirty. As we get into out
> >> of memory situations it is likely that the memory is dirty (otherwise it
> >> would have been reused).
> > Not quite correct. Inactive memory is _always_ available for re-use.
> 
> Are you sure that's true?

Yes. Read The Design and Implementation of FreeBSD for the details.

> Monitoring inactive memory:
> cat vm_stat.sh
> #!/usr/bin/env bash
> 
> while [ 1 ]; do
>date +%Y.%m.%d.%H.%M.%S
>sysctl -a | grep vm.stats.vm.
>sleep 1
> done
> 
> And even we get out of memory with swap_pager_getswapspace Inactive 
> Memory (from the log file) is around 20GB (doesn't go down or up)
> vm.stats.vm.v_inactive_count: 5193530 (x 4k pages, i.e. around 20GB)
> 
> Then we have 20GB inactive memory, but we still get out of memory with 
> kernel: swap_pager_getswapspace(4): failed. Any ideas why?

Theory: If the planner decides it needs to do 30 sort operations for a
query, it will try to allocate 27G of RAM, which exceeds what's available,
and therefore never gets allocated. So you get the "out of space" message,
but the actual memory usage doesn't change.
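With the work_mem = 892MB quoted earlier, that theory checks out arithmetically (a back-of-the-envelope sketch; the 30 sort operations are Bill's hypothetical figure):

```shell
# Hypothetical worst case: 30 concurrent sort operations, each entitled
# to the full work_mem (892 MB); work_mem applies per sort, not shared.
sorts=30
work_mem_mb=892
echo "$((sorts * work_mem_mb)) MB"   # -> 26760 MB, roughly the 27G mentioned
```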

> >> maintenance_work_mem = 512MB
> >> effective_cache_size = 10GB
> >> work_mem = 892MB
> > I expect that this value is the cause of the problem. The scenario you
> > describe below is sorting a large table on an unindexed column, meaning
> > it will have to use all that work_mem. I'd be interested to see the
> > output of:
> >
> > EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;
> 
> That was only a test query, has nothing to do with production based 
> query. They are mostly SELECT/INSERTS/UPDATES on primary keys.

Then provide the _actual_ queries and the EXPLAIN ANALYZE and the table
layouts and basic data distribution of the actual cause. If your test
case is completely non-representative of what's happening, then you're
not going to get useful answers.

> > But even without that information, I'd recommend you reduce work_mem
> > to about 16M or so.
> 
> Why so low? E.g. sorting on reporting or some long running queries are 
> then done on disk and not in memory.

Even a simple query could involve multiple sorts, and you're allowing
each sort to use up to 890M of RAM (which is _not_ shared). As noted
earlier, even a moderately complex query could exceed the available
RAM on the system. But since you don't provide the actual queries and
tables causing problems, I can only guess. And since you appear to
have already decided what the cause of the problem is, then crafted
completely non-relevent queries that you think prove your point, I'm
not sure there's anything I can do to help you.

> >> wal_buffers = 8MB
> >> checkpoint_segments = 16
> >> shared_buffers = 7080MB
> >> max_connections = 80
> >> autovacuum_max_workers = 3
> > [snip]
> >
> >>> We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple
> >>> worker processes connected via persistent connections to PostgreSQL,
> >>> they perform just simple queries with SELECT on primary keys and
> >>> simple INSERTS/UPDATES.
> > That's not at all the test scenario you show below. The scenario below
> > is a large sort operation on a non-indexed column, which is vastly
> > different than a single-row fetch based on an index.
> 
> Yes, that non-indexed select was just for testing purposes.


Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 09:57:21 +0100
Gerhard Wiesinger  wrote:
> 
> some further details from the original FreeBSD 10.1 machine:
> 
> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> 
>PID USERNAMETHR PRI NICE   SIZERES STATE   C   TIME WCPU COMMAND
> 77941 pgsql 5  200  7925M  7296M usem2 352:34 6.98% 
> postgres: username dbnamee 127.0.0.1(43367)  (postgres)



I see no evidence of an actual leak here. Each process is basically using
the 7G of shared_buffers you have allocated in the config (which is only
7G _total_ for all processes, since it's shared memory)

> Out of memory:
> kernel: swap_pager_getswapspace(4): failed
> kernel: swap_pager_getswapspace(8): failed
> kernel: swap_pager_getswapspace(3): failed
> 
> Main issue is IMHO (as far as I understood the FreeBSD Memory system) 
> that 20G are INACTIVE. When I subtract the shared memory, even ~13GB 
> should be available, but they are still allocated but inactive 
> (INACTIVE). INACTIVE memory might be clean or dirty. As we get into out 
> of memory situations it is likely that the memory is dirty (otherwise it 
> would have been reused).

Not quite correct. Inactive memory is _always_ available for re-use.

> Config:
> Memory: 32GB, Swap: 512MB

Probably not the cause of this problem, but this is a non-optimal layout.
If you're going to use swap at all, it should generally be 2x the size of
your RAM for optimal performance under memory pressure.

> maintenance_work_mem = 512MB
> effective_cache_size = 10GB
> work_mem = 892MB

I expect that this value is the cause of the problem. The scenario you
describe below is sorting a large table on an unindexed column, meaning
it will have to use all that work_mem. I'd be interested to see the
output of:

EXPLAIN ANALYZE SELECT * FROM t_random ORDER BY md5 LIMIT 10;

But even without that information, I'd recommend you reduce work_mem
to about 16M or so.
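To make the recommendation concrete (a rough sketch using the settings quoted above; work_mem is a per-sort ceiling per backend, so even one sort per connection multiplies out):

```shell
# Worst case if every allowed connection runs just one sort at once.
connections=80   # max_connections from the quoted config
echo "work_mem=892MB: $((connections * 892)) MB worst case"   # -> 71360 MB (~70 GB) on a 32 GB box
echo "work_mem=16MB:  $((connections * 16)) MB worst case"    # -> 1280 MB
```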

> wal_buffers = 8MB
> checkpoint_segments = 16
> shared_buffers = 7080MB
> max_connections = 80
> autovacuum_max_workers = 3

[snip]

> > We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple 
> > worker processes connected via persistent connections to PostgreSQL, 
> > they perform just simple queries with SELECT on primary keys and 
> > simple INSERTS/UPDATES.

That's not at all the test scenario you show below. The scenario below
is a large sort operation on a non-indexed column, which is vastly
different than a single-row fetch based on an index.

> > Normally nearly all the workers are idle but 
> > they still consume the maximum configured work mem on the PostgreSQL 
> > server and the memory is also resident.

I see no evidence of that in your top output. Each process has a
reference to the 7G of shared_buffers you allocated, which is memory
shared by all processes, and is expected. I'm not as familiar with
Linux top, but the results appear to be the same.

> > If some other queries get in 
> > we get into out of memory situations. So it looks like PostgreSQL has 
> > memory leaks.
> >
> > I found a test scenario to reproduce it also on a newer FreeBSD 10.2 
> > VM as well as in a Fedora 23 VM (both with PostgreSQL 9.4.5):
> >
> > Executions in psql with one persistent connection:
> > -- Create the table
> > CREATE TABLE t_random AS SELECT s, md5(random()::text) FROM 
> > generate_Series(1,1) s;
> > -- Create the index
> > CREATE INDEX ON t_random(s);
> >
> > -- Restart psql with a new connection:
> >
> > -- Memory goes slightly up after each execution even after canceling:
> > -- Memory leak on FreeBSD 10.2/Fedora 23 and PostgreSQL 9.4.5 on 
> > cancel the query or multiple execution
> > SELECT * FROM t_random ORDER BY md5 LIMIT 10;
> >
> > -- Therefore I created a function:
> > CREATE OR REPLACE FUNCTION execmultiplei(IN num int8)
> > RETURNS void AS $$
> > BEGIN
> >   -- RAISE NOTICE 'num=%', num;
> >   FOR i IN 1..num LOOP
> > PERFORM * FROM t_random WHERE s = i;
> >   END LOOP;
> > END;
> > $$  LANGUAGE plpgsql;
> >
> > -- Test it several times
> > SELECT execmultiplei(1000);
> >
> > -- Linux testing (FreeBSD is similar), relevant part is RES (resident 
> > memory):
> >   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> > -- after startup of psql
> > 26851 postgres  20   0 2363276   7432   6292 S   0.0  0.2 0:00.00 
> > postgres: postgres postgres [local] idle
> > -- Memory goes up, ok so far
> > 26851 postgres  20   0 2365732 255152 253548 R  99.0  6.3 0:10.77 
> > postgres: postgres postgres [local] SELECT
> > 26851 postgres  20   0 2365732 408464 406788 R 100.0 10.1 0:17.81 
> > postgres: postgres postgres [local] SELECT
> > 26851 postgres  20   0 2365732 864472 862576 R 100.0 21.4 0:38.90 
> > postgres: postgres postgres [local] SELECT
> > -- Function execmultiplei and transaction terminated, but memory still 
> > allocated!!!
> > 26851 postgres  20   0 

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

On 13.12.2015 18:17, Tom Lane wrote:

Gerhard Wiesinger  writes:

Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse

OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?

Just judging from the name of the function, I would bet this is a direct
result of having only 512M of swap configured.  As Bill already pointed
out, that's a pretty useless choice on a system with 32G of RAM.  As soon
as the kernel tries to push out any significant amount of idle processes,
it's gonna be out of swap space.  The numbers you show above prove that
it is almost out of free swap already.


The system wasn't designed by me; I wouldn't have done it that way either. Does 
swapoff help?




Also, while that 20G of "inactive" pages may be candidates for reuse,
they probably can't actually be reused without swapping them out ...
and there's noplace for that data to go.


There is nothing in syslog (where Postgres logs) at the time 
swap_pager_getswapspace is logged.


But why do we have 20G of Inactive pages? They are still allocated by 
kernel or user space. As you can see below (top output), non-Postgres 
processes are around 9G in virtual size, resident even lower. The system 
is nearly idle, and the queries are typically idle again after one second. 
Therefore, where does the rest of the 11G of Inactive pages come 
from (if it isn't a Postgres/FreeBSD memory leak)?

I read that Postgres has its own memory allocator:
https://www.reddit.com/r/programming/comments/18zija/github_got_30_better_performance_using_tcmalloc/
Might that be an issue with double allocation/freeing and the "Swiss 
cheese" memory fragmentation problem?


https://www.opennet.ru/base/dev/fbsdvm.txt.html
    inactive    pages not actively used by programs which are
                dirty and (at some point) need to be written
                to their backing store (typically disk).
                These pages are still associated with objects and
                can be reclaimed if a program references them.
                Pages can be moved from the active to the inactive
                queue at any time with little adverse effect.
                Moving pages to the cache queue has bigger
                consequences (note 1)

https://unix.stackexchange.com/questions/134862/what-do-the-different-memory-counters-in-freebsd-mean
Active: Memory currently being used by a process
Inactive: Memory that has been freed but is still cached since it 
may be used again. If more Free memory is required, this memory can be 
cleared and become free. This memory is not cleared before it is needed, 
because "free memory is wasted memory", it doesn't cost anything to keep 
the old data around in case it is needed again.
Wired: Memory in use by the Kernel. This memory cannot be swapped 
out (GW: including ZFS cache!!!)
Cache: Memory being used to cache data, can be freed immediately if 
required

Buffers: Disk cache
Free: Memory that is completely free and ready to use. Inactive, 
Cache and Buffers can become free if they are cleaned up.


Thnx.

Ciao,
Gerhard


last pid:  7277;  load averages:  0.91,  0.96,  1.02  up 18+06:22:31
18:57:54

135 processes: 2 running, 132 sleeping, 1 waiting

Mem: 8020M Active, 19G Inact, 3537M Wired, 299M Cache, 1679M Buf, 38M Free
Swap: 512M Total, 501M Used, 12M Free, 97% Inuse

  PID USERNAMETHR PRI NICE   SIZERES STATE   C TIMEWCPU COMMAND
77941 pgsql 5  200  7921M  7295M usem7 404:32  10.25% 
postgres

79570 pgsql 1  200  7367M  6968M sbwait  6 4:24   0.59% postgres
73449 pgsql 1  270  7367M  6908M sbwait  4 8:08   5.08% postgres
74209 pgsql 1  270  7367M  6803M sbwait  0 6:37   1.46% postgres
74207 pgsql 1  250  7367M  6748M sbwait  6 6:34   1.76% postgres
74206 pgsql 1  220  7367M  6548M sbwait  5 6:44   1.86% postgres
73380 pgsql 1  230  7367M  7265M sbwait  4 8:15   1.17% postgres
74208 pgsql 1  240  7367M  7223M sbwait  1 6:30   4.59% postgres
79569 pgsql 1  240  7367M  7105M sbwait  3 4:36   1.17% postgres
74210 pgsql 1  290  7363M  7182M sbwait  5 6:41   5.47% postgres
73479 pgsql 1  220  7363M  6560M sbwait  6 7:14   3.56% postgres
83030 pgsql 1  200  7329M   193M sbwait  5 0:00   0.00% postgres
76178 pgsql 1  200  7323M  7245M sbwait  2 3:44   0.00% postgres
75867 pgsql 1  200  7323M  7245M sbwait  2 3:45   0.00% postgres
75869 pgsql 1  200  7323M  7245M sbwait  2 3:46   0.00% postgres
75883 pgsql 1  200  7323M  7245M sbwait  7 3:46   0.00% postgres
76180 pgsql   

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Gerhard Wiesinger

On 13.12.2015 21:14, Bill Moran wrote:

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory?


For me the complete picture doesn't look consistent.


I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that whatever
is running under node is doing something in a horrifically memory-inefficient
manner. Since you mention that you see nothing in the PG logs, that makes it
even more likely (to me) that you're looking entirely in the wrong place.

I'd be willing to bet a steak dinner that if you put the web server on a
different server than the DB, that the memory problems would follow the
web server and not the DB server.


Changes in config:
track_activity_query_size = 102400
work_mem = 100MB

OK, we restarted PostgreSQL, keeping it stopped for a few seconds, and logged 
top every second:


When PostgreSQL was down nearly all memory was freed, looks good to me. 
So it is likely that node and other processes are not the cause.

Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
Swap: 512M Total, 477M Used, 35M Free, 93% Inuse

When PostgreSQL restarted, Inactive was growing fast (~1min):
Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M 
Free

Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

After some few minutes we are back again at the same situation:
Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
Swap: 512M Total, 472M Used, 41M Free, 92% Inuse

The steak dinner is mine :-) Donating to the PostgreSQL community :-)

Any further ideas? I don't think this is normal system behaviour.

Ciao,
Gerhard





Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 20:09:04 +0100
Gerhard Wiesinger  wrote:

> On 13.12.2015 18:17, Tom Lane wrote:
> > Gerhard Wiesinger  writes:
> >>> Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
> >>> Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
> >> OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?
> > Just judging from the name of the function, I would bet this is a direct
> > result of having only 512M of swap configured.  As Bill already pointed
> > out, that's a pretty useless choice on a system with 32G of RAM.  As soon
> > as the kernel tries to push out any significant amount of idle processes,
> > it's gonna be out of swap space.  The numbers you show above prove that
> > it is almost out of free swap already.
> 
> The system wasn't designed by me, I wouldn't do it either that way. Does 
> swapoff help?

FreeBSD and Linux (and most modern OS) are designed to have swap,
and usually more swap than RAM. I have never heard a good reason for
not using swap, and the reasons I _have_ heard have always been by
people misinformed about how the OS works.

If someone has a _good_ explanation for why you wouldn't want any
swap on a DB server, I'd love to hear it; but everything I've heard
up till now has been speculation based on misinformation.

IOW: no, you should not turn swap off, you should instead allocate
the appropriate amount of swap space.
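Putting numbers on the 2x rule of thumb for this machine (a sketch; whether 2x RAM is still the right ratio for large-RAM servers is itself debatable):

```shell
# Compare the configured swap against the 2x-RAM rule of thumb.
ram_gb=32
swap_configured_mb=512
swap_suggested_mb=$((ram_gb * 2 * 1024))
echo "configured: ${swap_configured_mb} MB, suggested: ${swap_suggested_mb} MB"
echo "$((swap_suggested_mb / swap_configured_mb))x smaller than the rule of thumb"   # -> 128x
```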

> > Also, while that 20G of "inactive" pages may be candidates for reuse,
> > they probably can't actually be reused without swapping them out ...
> > and there's noplace for that data to go.
> 
> There is no log in syslog (where postgres log) when 
> swap_pager_getswapspace is logged.
> 
> But why do we have 20G of Inactive pages? They are still allocated by 
> kernel or user space. As you can see below (top output) NON Postgres 
> processes are around 9G in virtual size, resident even lower. The system 
> is nearly idle, and the queries are typically idle again after one second. 
> Therefore where does the rest of the 11G of Inactive pages come 
> from (if it isn't a Postgres/FreeBSD memory leak)?
> I read that Postgres has its own memory allocator:
> https://www.reddit.com/r/programming/comments/18zija/github_got_30_better_performance_using_tcmalloc/
> Might that be an issue with double allocation/freeing and the "cheese 
> hole" topic with memory fragmentation?

If there were a memory leak in either FreeBSD or Postgres of the
seriousness you're describing that were as easy to trigger as you
claim, I would expect the mailing lists and other support forums
to be exploding in panic. Notice that they are not. Also, I still
don't see _ANY_ evidence of a leak. I see evidence that something
is happening that is trying to allocate a LOT of RAM, that isn't
available on your system; but that's not the same as a leak.

> https://www.opennet.ru/base/dev/fbsdvm.txt.html
>  inactive: pages not actively used by programs which are
>  dirty and (at some point) need to be written
>  to their backing store (typically disk).
>  These pages are still associated with objects and
>  can be reclaimed if a program references them.
>  Pages can be moved from the active to the inactive
>  queue at any time with little adverse effect.
>  Moving pages to the cache queue has bigger
>  consequences (note 1)

Correct, but, when under pressure, the system _will_ recycle those
pages to be available.

Tom might be correct in that the system thinks they are inactive
because it could easily push them out to swap, but then it can't
_actually_ do that because you haven't allocated enough swap, but
that doesn't match my understanding of how inactive is used. A
question of that detail would be better asked on a FreeBSD forum,
as the differences between different VM implementations can be
pretty specific and technical.

[snip]

> Mem: 8020M Active, 19G Inact, 3537M Wired, 299M Cache, 1679M Buf, 38M Free
> Swap: 512M Total, 501M Used, 12M Free, 97% Inuse
> 
>PID USERNAMETHR PRI NICE   SIZERES STATE   C TIMEWCPU COMMAND
> 77941 pgsql 5  200  7921M  7295M usem7 404:32  10.25% 
> postgres
> 79570 pgsql 1  200  7367M  6968M sbwait  6 4:24   0.59% postgres

[snip about 30 identical PG processes]

> 32387 myusername9  200   980M   375M uwait   5 69:03   1.27% node

[snip similar processes]

>622 myusername1  200   261M  3388K kqread  3 41:01   0.00% nginx

[snip similar processes]

Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.

What evidence do you have that Postgres is actually the part of
this system running out of memory? I don't see any such evidence in any of
your emails, and (based on experience) I find it pretty likely that whatever
is running under node is doing something in a horrifically memory-inefficient
manner.

Re: [GENERAL] Memory Leak executing small queries without closing the connection - FreeBSD

2015-12-13 Thread Bill Moran
On Sun, 13 Dec 2015 22:23:19 +0100
Gerhard Wiesinger  wrote:

> On 13.12.2015 21:14, Bill Moran wrote:
> > Wait ... this is a combined HTTP/Postgres server? You didn't mention that
> > earlier, and it's kind of important.
> >
> > What evidence do you have that Postgres is actually the part of
> > this system running out of memory?
> 
> For me the complete picture doesn't look consistent.

That's because you haven't gathered enough of the right type of information.

> > I don't see any such evidence in any of
> > your emails, and (based on experience) I find it pretty likely that whatever
> > is running under node is doing something in a horrifically 
> > memory-inefficient
> > manner. Since you mention that you see nothing in the PG logs, that makes it
> > even more likely (to me) that you're looking entirely in the wrong place.
> >
> > I'd be willing to bet a steak dinner that if you put the web server on a
> > different server than the DB, that the memory problems would follow the
> > web server and not the DB server.
> 
> Changes in config:
> track_activity_query_size = 102400
> work_mem = 100MB
> 
> Ok, we restarted PostgreSQL and had it stopped for seconds, and logged 
> top every second:
> 
> When PostgreSQL was down nearly all memory was freed, looks good to me. 
> So it is likely that node and other processes are not the cause.
> Mem: 742M Active, 358M Inact, 1420M Wired, 21G Cache, 871M Buf, 8110M Free
> Swap: 512M Total, 477M Used, 35M Free, 93% Inuse
> 
> When PostgreSQL restarted, Inactive was growing fast (~1min):
> Mem: 7998M Active, 18G Inact, 2763M Wired, 1766M Cache, 1889M Buf, 1041M 
> Free
> Swap: 512M Total, 472M Used, 41M Free, 92% Inuse
> 
> After some few minutes we are back again at the same situation:
> Mem: 8073M Active, 20G Inact, 2527M Wired, 817M Cache, 1677M Buf, 268M Free
> Swap: 512M Total, 472M Used, 41M Free, 92% Inuse
> 
> The steak dinner is mine :-) Donating to the PostgreSQL community :-)

Based on the fact that the inactive memory increased? Your understanding
of inactive memory in FreeBSD is incorrect. Those pages are probably DB
pages that the OS is keeping in inactive memory because Postgres requests
them over and over, which is what the OS is supposed to do to ensure the
best performance. Are you seeing any out of swap space errors? Even if
you are, you still haven't determined if the problem is the result of
Postgres or the node.js stuff you have running. I don't know what node.js
might be caching on the client side ... do you?

No. Until you can actually report back something other than wild
speculation, I'll keep that steak dinner for myself. Besides, that bet
was based on you putting the PG server on separate hardware from the
web server, which you didn't do.

-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Memory Leak executing small queries without closing the connection

2015-12-12 Thread Gerhard Wiesinger

Hello,

We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple worker 
processes connected via persistent connections to PostgreSQL, they 
perform just simple queries with SELECT on primary keys and simple 
INSERTS/UPDATES. Normally nearly all the workers are idle but they still 
consume the maximum configured work mem on the PostgreSQL server and the 
memory is also resident. If some other queries get in we get into out of 
memory situations. So it looks like PostgreSQL has memory leaks.


I found a test scenario to reproduce it also on a newer FreeBSD 10.2 VM 
as well as in a Fedora 23 VM (both with PostgreSQL 9.4.5):


Executions in psql with one persistent connection:
-- Create the table
CREATE TABLE t_random AS SELECT s, md5(random()::text) FROM 
generate_Series(1,1) s;

-- Create the index
CREATE INDEX ON t_random(s);

-- Restart psql with a new connection:

-- Memory goes slightly up after each execution even after canceling:
-- Memory leak on FreeBSD 10.2/Fedora 23 and PostgreSQL 9.4.5 on cancel 
the query or multiple execution

SELECT * FROM t_random ORDER BY md5 LIMIT 10;

-- Therefore I created a function:
CREATE OR REPLACE FUNCTION execmultiplei(IN num int8)
RETURNS void AS $$
BEGIN
  -- RAISE NOTICE 'num=%', num;
  FOR i IN 1..num LOOP
PERFORM * FROM t_random WHERE s = i;
  END LOOP;
END;
$$  LANGUAGE plpgsql;

-- Test it several times
SELECT execmultiplei(1000);

-- Linux testing (FreeBSD is similar), relevant part is RES (resident 
memory):

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
-- after startup of psql
26851 postgres  20   0 2363276   7432   6292 S   0.0  0.2 0:00.00 
postgres: postgres postgres [local] idle

-- Memory goes up, ok so far
26851 postgres  20   0 2365732 255152 253548 R  99.0  6.3 0:10.77 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 408464 406788 R 100.0 10.1 0:17.81 
postgres: postgres postgres [local] SELECT
26851 postgres  20   0 2365732 864472 862576 R 100.0 21.4 0:38.90 
postgres: postgres postgres [local] SELECT
-- Function execmultiplei and transaction terminated, but memory still 
allocated!!!
26851 postgres  20   0 2365732 920668 918748 S   0.0 22.7 0:41.40 
postgres: postgres postgres [local] idle

-- Calling it again
26851 postgres  20   0 2365732 920668 918748 R  99.0 22.7 0:46.51 
postgres: postgres postgres [local] SELECT

-- idle again, memory still allocated
26851 postgres  20   0 2365732 920668 918748 S   0.0 22.7 1:22.54 
postgres: postgres postgres [local] idle


Memory will only be released if psql is exited. According to the 
PostgreSQL design memory should be freed when the transaction completed.


top commands on FreeBSD: top -SaPz -o res -s 1
top commands on Linux: top -o RES d1

Config: VMs with 4GB of RAM, 2 vCPUs
shared_buffers = 2048MB # min 128kB
effective_cache_size = 2GB
work_mem = 892MB
wal_buffers = 8MB
checkpoint_segments = 16

Any ideas?

Thank you.

Ciao,
Gerhard





[GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-21 Thread John Leiseboer
I have written a number of functions in C that return BYTEA type. I have 
compiled and run on both Windows and Linux, 32-bit and 64-bit, PostgreSQL 
versions 9.3 and 9.4.

My functions return BYTEA data to the caller. The problem is that memory usage 
grows until there is no memory left on the host, at which point an error is 
returned. If I drop the connection (e.g. by quitting from psql), the memory is 
returned.

I wrote the following minimal function to test palloc() and BYTEA return 
behaviour, and found that this minimal program also exhibits the unbounded 
memory growth problem.


C source code:

PG_FUNCTION_INFO_V1(test_palloc);
Datum test_palloc(PG_FUNCTION_ARGS)
{
bytea *test_ret;
int test_len = 1024;

test_ret = (bytea *)palloc(test_len + VARHDRSZ);
SET_VARSIZE(test_ret, test_len + VARHDRSZ);
PG_RETURN_BYTEA_P(test_ret);
}

Function definition:

CREATE OR REPLACE FUNCTION test_palloc() RETURNS BYTEA
AS E'<path to library>', 'test_palloc' LANGUAGE C IMMUTABLE STRICT;

psql commands to reproduce the problem:

\o out.txt
SELECT ids.*, test_palloc() FROM GENERATE_SERIES(1, 100) ids;

At the completion of the above command, host memory will have been consumed but 
not released back to the system. After quitting psql (\q), memory is released.

Is this expected behaviour or a bug? Am I doing something wrong? How can I 
return a BYTEA type from a C library function without having to drop the 
connection in order to recover the allocated memory that is returned to the 
caller?

Regards,
John




Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-21 Thread Pavel Stehule
2015-09-21 4:31 GMT+02:00 John Leiseboer :

> I have written a number of functions in C that return BYTEA type. I have
> compiled and run on both Windows and Linux, 32-bit and 64-bit, PostgreSQL
> versions 9.3 and 9.4.
>
> My functions return BYTEA data to the caller. The problem is that memory
> usage grows until there is no memory left on the host, at which point an
> error is returned. If I drop the connection (e.g. by quitting from pqsql),
> the memory is returned.
>
> I wrote the following minimal function to test palloc() and BYTEA return
> behaviour, and found that this minimal program also exhibits the unbounded
> memory growth problem.
>
>
> C source code:
>
> PG_FUNCTION_INFO_V1(test_palloc);
> Datum test_palloc(PG_FUNCTION_ARGS)
> {
> bytea *test_ret;
> int test_len = 1024;
>
> test_ret = (bytea *)palloc(test_len + VARHDRSZ);
> SET_VARSIZE(test_ret, test_len + VARHDRSZ);
> PG_RETURN_BYTEA_P(test_ret);
> }
>
> Function definition:
>
> CREATE OR REPLACE FUNCTION test_palloc() RETURNS BYTEA
> AS E'<path to library>', 'test_palloc' LANGUAGE C IMMUTABLE STRICT;
>
> psql commands to reproduce the problem:
>
> \o out.txt
> SELECT ids.*, test_palloc() FROM GENERATE_SERIES(1, 100) ids;
>
> At the completion of the above command, host memory will have been
> consumed but not released back to the system. After quitting psql (\q),
> memory is released.
>
> Is this expected behaviour or a bug? Am I doing something wrong? How can I
> return a BYTEA type from a C library function without having to drop the
> connection in order to recover the allocated memory that is returned to the
> caller?
>
>
This memory is release, when memory context is dropped.

http://www.neilconway.org/talks/hacking/hack_slides.pdf look on slide 15

Regards

Pavel


> Regards,
> John
>
>


Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-20 Thread John Leiseboer
Tom Lane [mailto:t...@sss.pgh.pa.us] writes:
> But at any rate, bottom line is that your problem is client-side not 
> server-side, and no amount of fooling with the function innards will change 
> it.

I wish it were. While monitoring memory on Linux and Windows machines I see 
that psql memory usage hardly changes, but PostgreSQL server memory usage 
increases steadily until the query stops. PostgreSQL server memory usage stays 
high until after the client drops the connection. This is definitely a case of 
the server holding onto memory until the client drops the connection.

In other case, when I let the query continue until memory is exhausted, the 
PostgreSQL server crashes with "out of memory" error, not the client.

When does the PostgreSQL server call pfree() after a C function has returned to 
the caller? All I've found in books and Google searches is:

"What makes palloc() special is that it allocates the memory in the current 
context and the whole memory is freed in one go when the context is destroyed."

What "context"? The connection? The transaction? A SQL statement? The function 
call?

John





Re: [GENERAL] Memory for BYTEA returned by C function is not released until connection is dropped

2015-09-20 Thread Tom Lane
John Leiseboer  writes:
> I have written a number of functions in C that return BYTEA type. I have 
> compiled and run on both Windows and Linux, 32-bit and 64-bit, PostgreSQL 
> versions 9.3 and 9.4.
> My functions return BYTEA data to the caller. The problem is that memory 
> usage grows until there is no memory left on the host, at which point an 
> error is returned. If I drop the connection (e.g. by quitting from psql), 
> the memory is returned.

> I wrote the following minimal function to test palloc() and BYTEA return 
> behaviour, and found that this minimal program also exhibits the unbounded 
> memory growth problem.

> C source code:

> PG_FUNCTION_INFO_V1(test_palloc);
> Datum test_palloc(PG_FUNCTION_ARGS)
> {
>   bytea *test_ret;
>   int test_len = 1024;

>   test_ret = (bytea *)palloc(test_len + VARHDRSZ);
>   SET_VARSIZE(test_ret, test_len + VARHDRSZ);
>   PG_RETURN_BYTEA_P(test_ret);
> }

> Function definition:

> CREATE OR REPLACE FUNCTION test_palloc() RETURNS BYTEA AS E'<path to library>', 'test_palloc' LANGUAGE C IMMUTABLE STRICT;

> psql commands to reproduce the problem:

> \o out.txt
> SELECT ids.*, test_palloc() FROM GENERATE_SERIES(1, 100) ids;

> At the completion of the above command, host memory will have been consumed 
> but not released back to the system. After quitting psql (\q), memory is 
> released.

Well, first off, it's not that function that is eating memory: if you've
got it marked as IMMUTABLE then it will in fact only be executed *once*
per query.  I don't see any evidence of memory leakage on the server
at all when trying a comparable query.  (Disclaimer: I just used
"repeat('x', 1024)" rather than bothering to compile up a .so.  But I
do not think it behaves differently.)

What I do see bloating is psql, which is absorbing a query result of
about 1GB (100 1kB-sized rows) and then spending a lot of cycles
to pretty-print that while writing it to out.txt.

On my Linux box, psql does release the memory back to the kernel when
that's over.  But that would depend on the behavior of libc, so it's
quite plausible that some other platforms would not.

The way to avoid this is to not ask the client program to absorb the whole
1GB-sized query result at once.  You could use a cursor and FETCH a few
thousand rows at a time.  (In reasonably recent versions of psql, "\set
FETCH_COUNT" can do that for you automatically, at the cost of possibly
less pretty output formatting.)  Or use COPY, which is implemented in more
of a streaming style to begin with.

But at any rate, bottom line is that your problem is client-side not
server-side, and no amount of fooling with the function innards will
change it.

regards, tom lane




[GENERAL] Memory Utilization Issue

2015-05-20 Thread Sachin Srivastava
Dear Concern,


My server memory utilization always remains at 99%. I have 4 DB servers, with
RAM of 32 GB, 64 GB, 64 GB and 132 GB. On every server the memory utilization
is always > 99%. Kindly suggest why this happens and which parameter will
resolve the problem.

I am using PostgreSQL 9.1.2 and PostGIS 1.5 on every Red Hat Linux server.
Kindly find the top output, "ulimit -a" output, kernel settings
(kernel.shmmax, kernel.shmall, kernel.sem) and pg_settings output of one
server below. If you require any other information, let me know.





[root@CPPMOMA_DB01 ~]# top

top - 12:08:08 up 15 days, 15:45,  5 users,  load average: 1.05, 1.19, 1.35

Tasks: 605 total,   1 running, 604 sleeping,   0 stopped,   0 zombie

Cpu(s):  6.9%us,  2.1%sy,  0.0%ni, 78.3%id, 12.7%wa,  0.0%hi,  0.1%si,
0.0%st
Mem:  32832364k total, 32621168k used,   211196k free,    77572k buffers


[root@CPPMOMA_DB01 ~]# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 256323
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 4096
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 256323
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
[root@CPPMOMA_DB01 ~]#


kernel.shmmax = 32212254720

kernel.shmall = 1073741824

kernel.sem = 250 32000 100 384


postgres=# SELECT name, source, setting FROM pg_settings WHERE source !=
'default' AND source != 'override' ORDER by 2, 1;

            name            |        source        | setting
----------------------------+----------------------+----------

 application_name   | client   | psql.bin

 client_encoding| client   | UTF8

 archive_command| configuration file   | cp %p
/dbkup/momacpp_213_live/%f

 archive_mode   | configuration file   | on

 autovacuum | configuration file   | on

 autovacuum_max_workers | configuration file   | 3

 checkpoint_segments| configuration file   | 200

 checkpoint_timeout | configuration file   | 300

 checkpoint_warning | configuration file   | 30

 DateStyle  | configuration file   | ISO, MDY

 default_text_search_config | configuration file   | pg_catalog.english

 effective_cache_size   | configuration file   | 524288

 lc_messages| configuration file   | en_US.UTF-8

 lc_monetary| configuration file   | en_US.UTF-8

 lc_numeric | configuration file   | en_US.UTF-8

 lc_time| configuration file   | en_US.UTF-8

 listen_addresses   | configuration file   | *

 log_destination| configuration file   | stderr

 log_directory  | configuration file   | pg_log

 logging_collector  | configuration file   | on

 log_line_prefix| configuration file   | %t

 log_rotation_age   | configuration file   | 1440

 maintenance_work_mem   | configuration file   | 1638400

 max_connections| configuration file   | 2000

 max_files_per_process  | configuration file   | 2000

 max_wal_senders| configuration file   | 5

 port   | configuration file   | 5432

 shared_buffers | configuration file   | 1572864

 temp_buffers   | configuration file   | 4096

 wal_level  | configuration file   | archive

 work_mem   | configuration file   | 32768

 log_timezone   | environment variable | Asia/Kolkata

 max_stack_depth| environment variable | 2048

 TimeZone   | environment variable | Asia/Kolkata

(34 rows)



postgres=#






Regards,

*Sachin Srivastava*
Assistant Technical Lead(Oracle/PostgreSQL)| TSG
*Cyient* | www.cyient.com


Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Mathew Moon
What is the output of 'free -m'? Look at the "free" value on the
"-/+ buffers/cache" row; that is the most relevant number. Even if you are
swapping, that can happen with plenty of RAM available if 'swappiness' is
set too high.

Sent from my iPhone

 On May 20, 2015, at 2:25 AM, Sachin Srivastava ssr.teleat...@gmail.com 
 wrote:
 
 Dear Concern,
  
 Always my server memory utilization is remain 99%. I have 4 DB server and 
 RAM of the server is (32 GB, 64 GB, 64 GB and 132 GB). In every server always 
 we are getting the memory utilization > 99%. Kindly suggest why this problem 
 is and which parameter will resolve this problem.
 
 I am using PostgreSQL 9.1.2 and PostGIS 1.5 version on every Red hat Linux 
 server. Kindly find the TOP result, ulimit -a  result, (kernel.shmax, 
 kernel.shmall, kernel.sem value) and pg_setting result of one server as 
 below. If you require any other information then inform to me.
  
  
 [root@CPPMOMA_DB01 ~]# top
 top - 12:08:08 up 15 days, 15:45,  5 users,  load average: 1.05, 1.19, 1.35
 Tasks: 605 total,   1 running, 604 sleeping,   0 stopped,   0 zombie
 Cpu(s):  6.9%us,  2.1%sy,  0.0%ni, 78.3%id, 12.7%wa,  0.0%hi,  0.1%si,  0.0%st
 Mem:  32832364k total, 32621168k used,   211196k free,77572k buffers
 
 
 [root@CPPMOMA_DB01 ~]# ulimit -a
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 256323
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 256323
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 [root@CPPMOMA_DB01 ~]#
 
 
 kernel.shmmax = 32212254720
 
 kernel.shmall = 1073741824
 
 kernel.sem = 250 32000 100 384
 
  
 postgres=# SELECT name, source, setting FROM pg_settings WHERE source != 
 'default' AND source != 'override' ORDER by 2, 1;
 
 name|source| setting
 +--+--
  application_name   | client   | psql.bin
  client_encoding| client   | UTF8
  archive_command| configuration file   | cp %p 
 /dbkup/momacpp_213_live/%f
  archive_mode   | configuration file   | on
  autovacuum | configuration file   | on
  autovacuum_max_workers | configuration file   | 3
  checkpoint_segments| configuration file   | 200
  checkpoint_timeout | configuration file   | 300
  checkpoint_warning | configuration file   | 30
  DateStyle  | configuration file   | ISO, MDY
  default_text_search_config | configuration file   | pg_catalog.english
  effective_cache_size   | configuration file   | 524288
  lc_messages| configuration file   | en_US.UTF-8
  lc_monetary| configuration file   | en_US.UTF-8
  lc_numeric | configuration file   | en_US.UTF-8
  lc_time| configuration file   | en_US.UTF-8
  listen_addresses   | configuration file   | *
  log_destination| configuration file   | stderr
  log_directory  | configuration file   | pg_log
  logging_collector  | configuration file   | on
  log_line_prefix| configuration file   | %t
  log_rotation_age   | configuration file   | 1440
  maintenance_work_mem   | configuration file   | 1638400
  max_connections| configuration file   | 2000
  max_files_per_process  | configuration file   | 2000
  max_wal_senders| configuration file   | 5
  port   | configuration file   | 5432
  shared_buffers | configuration file   | 1572864
  temp_buffers   | configuration file   | 4096
  wal_level  | configuration file   | archive
  work_mem   | configuration file   | 32768
  log_timezone   | environment variable | Asia/Kolkata
  max_stack_depth| environment variable | 2048
  TimeZone   | environment variable | Asia/Kolkata
 (34 rows)
  
 postgres=#
  
 
  
  
 Regards,
 Sachin Srivastava
 Assistant Technical Lead(Oracle/PostgreSQL)| TSG
 Cyient | www.cyient.com


Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Naveed Shaikh
Could you also please check whether Transparent Huge Pages (THP) are enabled
on the server? They can result in intermittent poor performance along with
high system CPU time counted against the database processes.

This can be confirmed with the command below:

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

If [always] is selected, THP are enabled; disable them with the following
command:

echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

Thanks & Regards,
Naveed Shaikh





On Wed, May 20, 2015 at 12:55 PM, Sachin Srivastava ssr.teleat...@gmail.com
 wrote:

 Dear Concern,


 Always my server memory utilization is remain 99%. I have 4 DB server and
 RAM of the server is (32 GB, 64 GB, 64 GB and 132 GB). In every server
 always we are getting the memory utilization > 99%. Kindly suggest why
 this problem is and which parameter will resolve this problem.

 I am using PostgreSQL 9.1.2 and PostGIS 1.5 version on every Red hat Linux
 server. Kindly find the TOP result, ulimit -a  result,
 (kernel.shmax, kernel.shmall, kernel.sem value) and pg_setting result
 of one server as below. If you require any other information then inform to
 me.





 [root@CPPMOMA_DB01 ~]# top

 top - 12:08:08 up 15 days, 15:45,  5 users,  load average: 1.05, 1.19,
 1.35

 Tasks: 605 total,   1 running, 604 sleeping,   0 stopped,   0 zombie

 Cpu(s):  6.9%us,  2.1%sy,  0.0%ni, 78.3%id, 12.7%wa,  0.0%hi,  0.1%si,
 0.0%st
 Mem:  32832364k total, 32621168k used,   211196k free,77572k buffers


 [root@CPPMOMA_DB01 ~]# ulimit -a
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 256323
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 256323
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 [root@CPPMOMA_DB01 ~]#


 kernel.shmmax = 32212254720

 kernel.shmall = 1073741824

 kernel.sem = 250 32000 100 384


 postgres=# SELECT name, source, setting FROM pg_settings WHERE source !=
 'default' AND source != 'override' ORDER by 2, 1;

 name|source| setting


 +--+--

  application_name   | client   | psql.bin

  client_encoding| client   | UTF8

  archive_command| configuration file   | cp %p
 /dbkup/momacpp_213_live/%f

  archive_mode   | configuration file   | on

  autovacuum | configuration file   | on

  autovacuum_max_workers | configuration file   | 3

  checkpoint_segments| configuration file   | 200

  checkpoint_timeout | configuration file   | 300

  checkpoint_warning | configuration file   | 30

  DateStyle  | configuration file   | ISO, MDY

  default_text_search_config | configuration file   | pg_catalog.english

  effective_cache_size   | configuration file   | 524288

  lc_messages| configuration file   | en_US.UTF-8

  lc_monetary| configuration file   | en_US.UTF-8

  lc_numeric | configuration file   | en_US.UTF-8

  lc_time| configuration file   | en_US.UTF-8

  listen_addresses   | configuration file   | *

  log_destination| configuration file   | stderr

  log_directory  | configuration file   | pg_log

  logging_collector  | configuration file   | on

  log_line_prefix| configuration file   | %t

  log_rotation_age   | configuration file   | 1440

  maintenance_work_mem   | configuration file   | 1638400

  max_connections| configuration file   | 2000

  max_files_per_process  | configuration file   | 2000

  max_wal_senders| configuration file   | 5

  port   | configuration file   | 5432

  shared_buffers | configuration file   | 1572864

  temp_buffers   | configuration file   | 4096

  wal_level  | configuration file   | archive

  work_mem   | configuration file   | 32768

  log_timezone   | environment variable | Asia/Kolkata

  max_stack_depth| environment variable | 2048

  TimeZone   | environment variable | Asia/Kolkata

 (34 rows)



 postgres=#






 Regards,

 *Sachin Srivastava*
 Assistant Technical Lead(Oracle/PostgreSQL)| TSG
 *Cyient* | www.cyient.com



Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Albe Laurenz
Sachin Srivastava wrote:
 Always my server memory utilization is remain 99%. I have 4 DB server and 
 RAM of the server is (32
 GB, 64 GB, 64 GB and 132 GB). In every server always we are getting the 
 memory utilization > 99%.
 Kindly suggest why this problem is and which parameter will resolve this 
 problem.
 
 I am using PostgreSQL 9.1.2 and PostGIS 1.5 version on every Red hat Linux 
 server. Kindly find the
 TOP result, ulimit -a  result, (kernel.shmax, kernel.shmall, 
 kernel.sem value) and
 pg_setting result of one server as below. If you require any other 
 information then inform to me.

That sounds just fine.
Linux uses memory for the file system cache.  That memory is shown as in use, 
but
it is available for processes if they need it.

It doesn't look like your machine is swapping.

Do you experience problems?

Yours,
Laurenz Albe



[GENERAL] Memory management in postgres (with liblwgeom functions in particular)

2015-03-30 Thread Igor Stassiy
Hello,

I am developing a C++ extension (most of the code is C++) for postgres that
links dynamically with liblwgeom, without linking to postgis. I call
liblwgeom functions that serialize/deserialize LWGEOM* (and similar
structures) that don't need a backend like GEOS.

I wonder how the memory is freed when we call lwerror, as the transaction
will be terminated (elog/ereport performs a long jump), so if I call
liblwgeom's functions from within C++, the stack will not be unwound, and
even if I use smart pointers it wouldn't make a difference (right?).

On the contrary, when Postgis module loads itself, in _PG_init it overrides
memory allocation functions of liblwgeom with pg_alloc and pg_free. Which
in turn call palloc and pfree. And in this case when we call lwerror, the
memory that we allocated is freed automatically (right?).

I guess (just a guess) it has something to do with memory contexts, and
when a memory context is closed the entire memory allocated within it is
freed. But lwalloc by default is malloc, so does Postgres do something
extremely clever like overriding malloc with its palloc?

Thank you,
Igor


Re: [GENERAL] memory leak while trying to update/alter column in postgresql

2014-07-05 Thread Michael Paquier
On Sat, Jul 5, 2014 at 11:06 AM, Madhurima Das madhurima@gmail.com wrote:
 int main()
 {
 PGconn *conn;
PGresult *res;
 int i=0,nFields=0,row=0,col=0;

 conn = PQconnectdb("dbname=test1 host=localhost user=postgres
 password=yyy");
if(PQstatus(conn) == CONNECTION_BAD)
{
 fprintf(stderr, "Connection to database \"%s\" failed.\n",
 PQerrorMessage(conn));
 fprintf(stderr, "%s", PQerrorMessage(conn));
 exit_nicely(conn);
 }

 res = PQexec(conn, "IF COL_LENGTH('xxx','comment') IS NULL");
Call to PQclear is missing here or before the next call to PQexec.

 if(res)
 res = PQexec(conn, "ALTER TABLE xxx ADD comment VARCHAR(500)");
else
 res = PQexec(conn, "UPDATE TABLE xxx ADD comment VARCHAR(500)");
Same here.

 res = PQexec(conn, "IF COL_LENGTH('xxx','id') IS NULL");
Same here.

 if(res)
 res = PQexec(conn, "ALTER TABLE xxx ADD id VARCHAR(50)");
else
 res = PQexec(conn, "UPDATE TABLE xxx ADD id VARCHAR(50)");
Same here.

   res = PQexec(conn, "SELECT * FROM xxx");
 if((!res) || (PQresultStatus(res) != PGRES_TUPLES_OK))
 {
 fprintf(stderr, "SELECT command did not return tuples properly\n");
 PQclear(res);
 }

 PQclear(res);
 PQfinish(conn);

 return 0;
 }
Btw, there is a lot of code duplication...
-- 
Michael




[GENERAL] memory leak while trying to update/alter column in postgresql

2014-07-04 Thread Madhurima Das
I am trying to update/alter two columns, comment and id, in a table called
xxx using PostgreSQL. The compiler does not show any errors, but
Valgrind reports major memory leak errors. The code is:

int main()
{
PGconn *conn;
PGresult *res;
int i=0,nFields=0,row=0,col=0;

conn = PQconnectdb("dbname=test1 host=localhost user=postgres
password=yyy");
if(PQstatus(conn) == CONNECTION_BAD)
{
fprintf(stderr, "Connection to database \"%s\" failed.\n",
PQerrorMessage(conn));
fprintf(stderr, "%s", PQerrorMessage(conn));
exit_nicely(conn);
}

res = PQexec(conn, "IF COL_LENGTH('xxx','comment') IS NULL");
if(res)
res = PQexec(conn, "ALTER TABLE xxx ADD comment VARCHAR(500)");
else
res = PQexec(conn, "UPDATE TABLE xxx ADD comment VARCHAR(500)");

res = PQexec(conn, "IF COL_LENGTH('xxx','id') IS NULL");
if(res)
res = PQexec(conn, "ALTER TABLE xxx ADD id VARCHAR(50)");
else
res = PQexec(conn, "UPDATE TABLE xxx ADD id VARCHAR(50)");

  res = PQexec(conn, "SELECT * FROM xxx");
if((!res) || (PQresultStatus(res) != PGRES_TUPLES_OK))
{
fprintf(stderr, "SELECT command did not return tuples properly\n");
PQclear(res);
}

PQclear(res);
PQfinish(conn);

return 0;
}

Valgrind gives the following errors:

$ valgrind --track-origins=yes --leak-check=full ./Gwidd_uniprot_map2 xaa

==4525== Memcheck, a memory error detector
==4525== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==4525== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
==4525== Command: ./Gwidd_uniprot_map2 xaa
==4525==
--4525-- ./Gwidd_uniprot_map2:
--4525-- dSYM directory has wrong UUID; consider using --dsymutil=yes
==4525==
==4525== HEAP SUMMARY:
==4525== in use at exit: 262,994 bytes in 751 blocks
==4525==   total heap usage: 1,012 allocs, 261 frees, 345,158 bytes
allocated
==4525==
==4525== 2,248 (200 direct, 2,048 indirect) bytes in 1 blocks are
definitely lost in loss record 400 of 414
==4525==at 0x100011345: malloc (vg_replace_malloc.c:236)
==4525==by 0x10007B7E0: PQmakeEmptyPGresult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008DF30: pqGetErrorNotice3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008CC00: pqParseInput3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007ECD9: parseInput (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007EE3F: PQgetResult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F6F1: PQexecFinish (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F1DC: PQexec (in /usr/lib/libpq.5.dylib)
==4525==by 0x1199A: main (in ./Gwidd_uniprot_map2)
==4525==
==4525== 2,248 (200 direct, 2,048 indirect) bytes in 1 blocks are
definitely lost in loss record 401 of 414
==4525==at 0x100011345: malloc (vg_replace_malloc.c:236)
==4525==by 0x10007B7E0: PQmakeEmptyPGresult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008DF30: pqGetErrorNotice3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008CC00: pqParseInput3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007ECD9: parseInput (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007EE3F: PQgetResult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F6F1: PQexecFinish (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F1DC: PQexec (in /usr/lib/libpq.5.dylib)
==4525==by 0x119B5: main (in ./Gwidd_uniprot_map2)
==4525==
==4525== 2,248 (200 direct, 2,048 indirect) bytes in 1 blocks are
definitely lost in loss record 402 of 414
==4525==at 0x100011345: malloc (vg_replace_malloc.c:236)
==4525==by 0x10007B7E0: PQmakeEmptyPGresult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008DF30: pqGetErrorNotice3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008CC00: pqParseInput3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007ECD9: parseInput (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007EE3F: PQgetResult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F6F1: PQexecFinish (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F1DC: PQexec (in /usr/lib/libpq.5.dylib)
==4525==by 0x119DF: main (in ./Gwidd_uniprot_map2)
==4525==
==4525== 2,248 (200 direct, 2,048 indirect) bytes in 1 blocks are
definitely lost in loss record 403 of 414
==4525==at 0x100011345: malloc (vg_replace_malloc.c:236)
==4525==by 0x10007B7E0: PQmakeEmptyPGresult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008DF30: pqGetErrorNotice3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10008CC00: pqParseInput3 (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007ECD9: parseInput (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007EE3F: PQgetResult (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F6F1: PQexecFinish (in /usr/lib/libpq.5.dylib)
==4525==by 0x10007F1DC: PQexec (in /usr/lib/libpq.5.dylib)
==4525==by 0x119FA: main (in ./Gwidd_uniprot_map2)
==4525==
==4525== LEAK SUMMARY:
==4525==definitely lost: 800 bytes in 4 blocks
==4525==indirectly lost: 8,192 bytes in 4 blocks
==4525==  possibly lost: 0 bytes in 0 blocks
==4525==still reachable: 254,002 bytes in 743 blocks
==4525== suppressed: 0 bytes in 0 blocks
==4525== Reachable blocks 

[GENERAL] Memory leak with CREATE TEMP TABLE ON COMMIT DROP?

2014-06-12 Thread Eric Ridge
# select version();
                                version
------------------------------------------------------------------------------
 PostgreSQL 9.3.4 on x86_64-apple-darwin13.2.0, compiled by Apple LLVM version 
5.1 (clang-503.0.40) (based on LLVM 3.4svn), 64-bit
(1 row)

As best I can guess, Postgres has some kind of memory leak around (at least) 
temporary tables flagged to drop on commit.  It's fairly easy to reproduce:

Terminal A
--
$ createdb leak
$ for i in $(seq 1 100) ; do echo "begin; create temp table foo() on commit 
drop; commit;"; done | psql leak > /dev/null

Terminal B
--
$ while(true); do ps auwx | grep $PID_OF_POSTGRES_PROCESS_FROM_TERMINAL_A; 
sleep 1 ; done

And watch the RSS size continue to climb, fairly quickly.  This happens on both 
OS X and Linux (both x86_64).

We ran into this thanks to an update trigger that created a temporary table 
with on commit drop where we were literally updating millions of rows as atomic 
transactions, across about 100 concurrent connections, firing the trigger for 
each atomic update.  The server quickly ran out of memory.

It took some time to find what appears to be the actual problem, but I think 
this is it.  We've since rewritten the trigger to avoid using a temporary table 
(probably a good thing anyways) and all is well, but I was very shocked to see 
Postgres behaving badly here.

Any thoughts?  And thanks for your time!

eric




PROPRIETARY AND COMPANY CONFIDENTIAL COMMUNICATIONS
The information contained in this communication is intended only for
the use of the addressee. Any other use is strictly prohibited.
Please notify the sender if you have received this message in error.
This communication is protected by applicable legal privileges and is
company confidential.





Re: [GENERAL] Memory leak with CREATE TEMP TABLE ON COMMIT DROP?

2014-06-12 Thread Tom Lane
Eric Ridge e_ri...@tcdi.com writes:
 As best I can guess, Postgres has some kind of memory leak around (at least) 
 temporary tables flagged to drop on commit.  It's fairly easy to reproduce:

I don't see any memory leak with this example.

What I do see is the process's use of shared memory grows slowly until it
reaches 128MB (or whatever you've set shared_buffers to), and then stops.
This is normal behavior and has nothing to do with any memory leak: what
causes that is that top and related tools typically don't count shared
memory pages in a process's memory footprint until the process has
actually touched those pages.  So as the process gradually touches more
and more of the shared buffers, its memory usage appears to climb, but
that's completely an illusion of the viewing tool.  The important point
is the process's private memory usage, and that's not growing at all
AFAICS --- it's steady at a small number of megabytes.

Since you say you had an out-of-memory failure, you may well have had
some kind of leak in your original situation, but this test case isn't
reproducing it.

regards, tom lane




[GENERAL] memory leaking ?

2014-05-28 Thread Zhemin Zhou

Hi all

We have been using PostgreSQL version 8.1.11 for 6 years and recently had to 
move our website to another server. We did not manage to get 
version 8.1.11, so we had to use version 8.4.12 on the new server. 
This version is still quite old, but we cannot get the databases running 
on any newer version.


We ran into a problem after running the website for one week. We use a 
function to convert and save binary files into the database (as bytea). 
This function worked well in the old version but sometimes makes the new 
version of postgres crash. This random crash is not file specific. 
Sometimes the function works well on all binary files, but at other times 
it crashes again and again.


The OS system in the new server is: Ubuntu 12.04.2.

The error messages are:
2(56608) SELECT: malloc.c:2451: sYSMALLOc: Assertion `(old_top == 
(((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - 
__builtin_offsetof (struct malloc_chunk, fd))) && old_size == 0) || 
((unsigned long) (old_size) >= (unsigned long)(
(((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * 
(sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && 
((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' 
failed.

LOG:  server process (PID 8866) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back 
the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and 
repeat your command.

[the same WARNING/DETAIL/HINT block repeated six more times, once per backend]

LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2014-05-26 
10:53:57 BST
LOG:  database system was not properly shut down; automatic recovery in 
progress

LOG:  redo starts at 1/D2BC56A0
LOG:  record with zero length at 1/D2BCC600
LOG:  redo done at 1/D2BCC5C8
LOG:  last completed transaction was at log time 2014-05-26 
11:31:07.172215+01

LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started



And here is the C function, which was written about 10 years ago and 
was compiled with gcc -O2 -shared -fPIC convert_bin_to_64.c -o 
convert_bin_to_64.so:


#include "postgres.h"
#include "fmgr.h"
#include <string.h>
PG_MODULE_MAGIC;

bytea *convert_bin_to_64(bytea *t)
/* unsigned char because special characters can also be negative */
{
int ps,v0,v1,v2,v3,vl,ps2;
char charset[70];
int size=VARSIZE(t)-VARHDRSZ; /* VARSIZE and VARHDRSZ are 
PostgreSQL macros */
bytea *result = (bytea *) 
palloc(VARHDRSZ+sizeof(char)*(4*(size)/3+15));



Re: [GENERAL] memory leaking ?

2014-05-28 Thread Tom Lane
Zhemin Zhou zhemin.z...@warwick.ac.uk writes:
 We met a problem after running the website for one week. We used a 
 function to convert and save binary files into the database (as bytea). 
 This function worked well in the old version but sometimes makes the new 
 version of postgres crash. This random crash is not file specific. 

AFAICT, it's pure luck that it didn't crash the older system too.  You're
allocating the output buffer too small, at least for cases where size
isn't a multiple of 3:

  bytea *result = (bytea *) 
 palloc(VARHDRSZ+sizeof(char)*(4*(size)/3+15));

For example, if size = 2, 4*2/3 is only 2, but the loop will write 4 bytes
of data.  So the function sometimes clobbers bytes beyond what it
allocated, which unsurprisingly corrupts malloc's data structures.
You need to round up not truncate in this division.

regards, tom lane




[GENERAL] memory issues with BYTEAs in JSON and hstore

2014-05-27 Thread Felix Kunde
Hey,
I've developed a database auditing/versioning tool using the JSON data type 
(http://github.com/fxku/audit) and am doing some tests now. 
Unfortunately I'm facing some problems when dealing with tables that store 
images as BYTEA. Some of them are around 15 MB big.

My tool logs changes to JSON and can populate them back to views with 
json_populate_recordset and json_agg. When performing this procedure on tables 
containing BYTEAs I receive a 54000 error ("Cannot enlarge string buffer"). Is 
this because of json_agg or because of single rows? 

Executing to_json on the whole column that contains the binary data (size above 
500 MB) led to out-of-memory errors. The same goes for hstore. Executing these 
functions only on the biggest image was successful but froze my pgAdmin. When 
I encoded BYTEA to TEXT before transforming it to JSON or hstore it worked. But 
trying json_populate_recordset still runs into memory problems (though explain 
worked). 

Do you think JSONB will solve my problems in the future?

Here is also a comparison in size between the bytea (and encoded versions to 
TEXT) and JSON / hstore output which I found kinda interesting:

operation | bytea | bytea-'escape' | bytea-'hex' | bytea-'base64'
--|---|-|--|
  | 15 MB | 40 MB   | 31 MB| 21 MB
to_json() | 57 MB | 57 MB   | 31 MB| 21 MB
hstore()  | 46 MB | 40 MB   | 31 MB| 21 MB

Thanks in advance for any hints.
Cheers,
Felix




Re: [GENERAL] Memory usage per postmaster process

2013-11-03 Thread Grzegorz Tańczyk

On 11/02/2013 08:47 PM, Tom Lane wrote:

Author: Tom Lanet...@sss.pgh.pa.us
Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400

 Reduce the memory requirement for large ispell dictionaries.

I checked this patch:
http://www.postgresql.org/message-id/AANLkTi=4fUi1zoFMpZ==yf14rjdv_g1xgakvqmdye...@mail.gmail.com

I can't find it here:
http://doxygen.postgresql.org/spell_8c_source.html

I also don't see those changes in the 9.3.1 source. Status in the commitfest 
list is "Committed". I can't see hold_memory anywhere.



If you're not using ispell, it's not relevant, and I'm not sure whether
the savings were significant for anything but Czech.

I am using ispell.

Thanks

--
Regards,
  Grzegorz


Re: [GENERAL] Memory usage per postmaster process

2013-11-03 Thread Tom Lane
Grzegorz Tańczyk golia...@polzone.pl writes:
 On 11/02/2013 08:47 PM, Tom Lane wrote:
 Author: Tom Lanet...@sss.pgh.pa.us
 Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400
 
 Reduce the memory requirement for large ispell dictionaries.
 I checked this patch:
 http://www.postgresql.org/message-id/AANLkTi=4fUi1zoFMpZ==yf14rjdv_g1xgakvqmdye...@mail.gmail.com

 I can't find it here:
 http://doxygen.postgresql.org/spell_8c_source.html

 I also don't see those changes in the 9.3.1 source. Status in the commitfest 
 list is "Committed". I can't see hold_memory anywhere.

If you read the rest of the discussion of the patch, you'd find out that
what got committed was not all that much like Pavel's original.  But
it has the same effect.

regards, tom lane




[GENERAL] Memory usage per postmaster process

2013-11-02 Thread Grzegorz Tańczyk

Hello,

Recently I had some problem with tsearch2 in postgres 8.3. Application 
was using JDBC connection pool and it was possible that each connection 
from the pool used tsearch2(with dictionary) in some query. This 
resulted in almost every postmaster process using 0.5g memory.
Solution of this particular problem was simple: create separate small 
pool for tsearch2 related queries.


Unfortunately some other queries make postmaster processes grow, and 
while this does not cause much trouble in the production environment, it makes 
it difficult to set up a dev instance, because the application and postgres take 
more RAM than I currently have. 4GB of memory in swap kills everything even 
though I have it in software raid0 with 6 disks. I run postgres and the 
application in the same LXC instance.


Now I could play around with application code even more, but it is old 
and ugly, so instead I am looking at solution in postgresql.conf, which 
I don't fully understand.


A postgres upgrade could also be a solution, but was there any 
significant improvement in system resource management within 8.X? An upgrade 
to 9.X might be too big an effort at this time. I want to avoid problems 
with compatibility, because it will take a lot of time to track them in 
the application code, which is very low quality.


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool.


Thanks!

--
Regards,
  Grzegorz







Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread John R Pierce

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill





--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Grzegorz Tańczyk

On 11/02/2013 07:47 PM, John R Pierce wrote:

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill
In fact that's what happened when the tsearch2 problem occurred, even though 
there were only a few queries running at once. A group of idle connections 
was using resources, and that's the thing I don't understand.
Did tsearch2 dictionary caching implementation improve after 8.3 on this 
matter?


Making a small connection pool will help, but how small should it be? 
1 connection max, 0 connections minimum? Connections will get closed 
after they are released by application code, but still there will be 
some group of postmaster processes, and how can I be sure that none of them 
will take 1gb of system memory? I can't have any control over this (other 
than grepping ps output and manually pg_cancel_backend-ing them once they 
grow too much).


Thanks!

--
Regards,
  Grzegorz





Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Tom Lane
Grzegorz Tańczyk golia...@polzone.pl writes:
 Did tsearch2 dictionary caching implementation improve after 8.3 on this 
 matter?

Well, there was this:

Author: Tom Lane t...@sss.pgh.pa.us
Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400

Reduce the memory requirement for large ispell dictionaries.

This patch eliminates per-chunk palloc overhead for most small allocations
needed in the representation of an ispell dictionary.  This saves close to
a factor of 2 on the current Czech ispell data.  While it doesn't cover
every last small allocation in the ispell code, we are at the point of
diminishing returns, because about 95% of the allocations are covered
already.

Pavel Stehule, rather heavily revised by Tom

If you're not using ispell, it's not relevant, and I'm not sure whether
the savings were significant for anything but Czech.

regards, tom lane




Re: [GENERAL] Memory Issue with array_agg?

2013-08-21 Thread Robert Sosinski
Hi Pavel,

Here are the explains you asked for:

explain analyze select string_agg(id::text,',') from things group by guid;
QUERY PLAN

--
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=41434.485..53195.185 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=41434.433..44992.736 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.027..21429.179 rows=2378626 loops=1)
 Total runtime: 56295.362 ms
(6 rows)


explain analyze select array_agg(id::text) from things group by guid;
   QUERY PLAN

-
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=23953.922..38157.059 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=23953.847..27527.316 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.007..4941.752 rows=2378626 loops=1)
 Total runtime: 41280.897 ms
(6 rows)



These run fine on the machine now, and the memory is not
inflating, but then I ran this one, and it blew up:
explain with t as (select id, guid, md5(concat_ws(':', fields -> 'a',
fields -> 'b', fields -> 'c', fields -> 'd', fields -> 'e', foo_id::text))
from things) select md5, count(id), array_agg(id) from t group by 1 having
count(id) > 1;

-Robert


On Tue, Aug 20, 2013 at 1:53 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 Can you send a EXPLAIN result in both use cases?

 Pavel


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct, however before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
 pavel.steh...@gmail.comwrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I cant give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates

 ??

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
  | plain|  |
  created_at | timestamp without time zone | not null
  | plain|  |
  updated_at | timestamp without time zone | not null
  | plain|  |
  foo_id | integer | not null
  | plain|  |
  bar_id | integer | not null
  | plain|  |
  baz_id | integer | not null
  | plain|  |
  guid   | character varying   | not null
  | extended |  |
  name   | character varying   | not null
  | extended |  |
  price  | numeric(12,2)   | not null
  | main |  |
  currency   | character varying   | not null
  | extended |  |
  amount | integer | not null
  | plain|  |
  the_date   | date| not null
  | plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree 

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Hello

It is strange. I am trying to simulate it without success, on 1M rows
where every id appears twice:

processing string_agg .. cca 30MB

processing array_agg cca 32MB

postgres=# create table foo(a int, b varchar);
CREATE TABLE
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,100) g(i);
INSERT 0 100
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,100) g(i);
INSERT 0 100
postgres=# CREATE INDEX on foo(b);
CREATE INDEX
postgres=# ANALYZE foo;
ANALYZE
postgres=# explain analyze select string_agg(a::text,',') from foo group by
b;
 QUERY
PLAN

 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10195.972..14993.493 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10195.944..13659.985 rows=2000000 loops=1)
 Sort Key: b
 Sort Method: external merge  Disk: 97768kB
 ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.018..321.197 rows=2000000 loops=1)
 Total runtime: 15066.397 ms
(6 rows)

postgres=# explain analyze select array_agg(a::text) from foo group by b;
 QUERY
PLAN

 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10062.095..15697.755 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10062.059..13613.300 rows=2000000 loops=1)
 Sort Key: b
 Sort Method: external merge  Disk: 97768kB
 ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.029..311.423 rows=2000000 loops=1)
 Total runtime: 15799.226 ms
(6 rows)

Regards

Pavel


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct, however before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
  pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
   

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Can you send a EXPLAIN result in both use cases?

Pavel


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct, however before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
  pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
 | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
 EXECUTE PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
  pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
 commanded this server process to roll back the current transaction and
 exit, because another server process exited abnormally and possibly
 corrupted shared memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
 able to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and
 when I ran the query while viewing htop, the virtual size of the
 Postgres process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
Hello

please, can you send some example or test?

Regards

Pavel Stehule


2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed the
 query to use string_agg, it worked fine.  I also tried using array_agg on a
 few different queries, all yielding the same issue.  Swapping in string_agg
 fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when I
 ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert



Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
Hi Pavel,

What kind of example do you need?  I can't give you the actual data I have
in the table, but I can give you an example query and the schema attached
below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
data.  Average size of the extended columns (using the pg_column_size
function) in bytes are:

guid: 33
name: 2.41
currency: 4
fields: 120.32

example query:

-- find duplicate records using a guid
select guid, array_agg(id) from orders group by guid;
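A lighter-weight variant for the same goal (not from the thread, just a hedged sketch using the same hypothetical `orders`/`guid` names): aggregating only a count and filtering with HAVING reports the duplicated guids without building a per-group array at all, so memory stays flat even with millions of distinct guids:

```sql
-- Sketch: list only the guids that actually have duplicates.
-- count(*) needs no per-group array, unlike array_agg(id).
SELECT guid, count(*) AS copies
FROM orders
GROUP BY guid
HAVING count(*) > 1
ORDER BY copies DESC;
```

array_agg(id) could then be applied to just that (presumably small) set of guids if the duplicate ids themselves are needed.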

example schema:
                                              Table "public.things"
   Column   |            Type             |                      Modifiers                       | Storage  | Stats target | Description
------------+-----------------------------+------------------------------------------------------+----------+--------------+-------------
 id         | integer                     | not null default nextval('things_id_seq'::regclass)  | plain    |              |
 version    | integer                     | not null                                             | plain    |              |
 created_at | timestamp without time zone | not null                                             | plain    |              |
 updated_at | timestamp without time zone | not null                                             | plain    |              |
 foo_id     | integer                     | not null                                             | plain    |              |
 bar_id     | integer                     | not null                                             | plain    |              |
 baz_id     | integer                     | not null                                             | plain    |              |
 guid       | character varying           | not null                                             | extended |              |
 name       | character varying           | not null                                             | extended |              |
 price      | numeric(12,2)               | not null                                             | main     |              |
 currency   | character varying           | not null                                             | extended |              |
 amount     | integer                     | not null                                             | plain    |              |
 the_date   | date                        | not null                                             | plain    |              |
 fields     | hstore                      |                                                      | extended |              |
Indexes:
    "things_pkey" PRIMARY KEY, btree (id)
    "things_foo_id_idx" btree (foo_id)
    "things_bar_id_idx" btree (bar_id)
    "things_baz_id_idx" btree (baz_id)
    "things_guid_uidx" UNIQUE, btree (guid)
    "things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10, '0'::text))
    "things_price_idx" btree (price)
Foreign-key constraints:
    "things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
    "things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
    "things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
Triggers:
    timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE PROCEDURE timestamps_tfun()

Let me know if you need anything else.

Thanks,


On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when I
 ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert





Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I have
 in the table, but I can give you an example query and the schema attached
 below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
 data.  Average size of the extended columns (using the pg_column_size
 function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


how many distinct guids are there, and how many duplicates?

regards

Pavel




 example schema:
  Table public.things

Column   |Type |  Modifiers
  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
  | plain|  |
  created_at | timestamp without time zone | not null
  | plain|  |
  updated_at | timestamp without time zone | not null
  | plain|  |
  foo_id | integer | not null
  | plain|  |
  bar_id | integer | not null
  | plain|  |
  baz_id | integer | not null
  | plain|  |
  guid   | character varying   | not null
  | extended |  |
  name   | character varying   | not null
  | extended |  |
  price  | numeric(12,2)   | not null
  | main |  |
  currency   | character varying   | not null
  | extended |  |
  amount | integer | not null
  | plain|  |
  the_date   | date| not null
  | plain|  |
  fields | hstore  |
   | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
 PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when
 I ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert






Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
At the moment, all guids are distinct, however before I zapped the
duplicates, there were 280 duplicates.

Currently, there are over 2 million distinct guids.

-Robert


On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I have
 in the table, but I can give you an example query and the schema attached
 below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
 data.  Average size of the extended columns (using the pg_column_size
 function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
  Table public.things

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
| plain|  |
  created_at | timestamp without time zone | not null
| plain|  |
  updated_at | timestamp without time zone | not null
| plain|  |
  foo_id | integer | not null
| plain|  |
  bar_id | integer | not null
| plain|  |
  baz_id | integer | not null
| plain|  |
  guid   | character varying   | not null
| extended |  |
  name   | character varying   | not null
| extended |  |
  price  | numeric(12,2)   | not null
| main |  |
  currency   | character varying   | not null
| extended |  |
  amount | integer | not null
| plain|  |
  the_date   | date| not null
| plain|  |
  fields | hstore  |
   | extended |  |
 Indexes:
 things_pkey PRIMARY KEY, btree (id)
 things_foo_id_idx btree (foo_id)
 things_bar_id_idx btree (bar_id)
 things_baz_id_idx btree (baz_id)
 things_guid_uidx UNIQUE, btree (guid)
 things_lpad_lower_name_eidx btree (lpad(lower(name::text), 10,
 '0'::text))
 things_price_idx btree (price)

 Foreign-key constraints:
 things_foo_id_fkey FOREIGN KEY (foo_id) REFERENCES foos(id)
 things_bar_id_fkey FOREIGN KEY (bar_id) REFERENCES bars(id)
 things_baz_id_fkey FOREIGN KEY (baz_id) REFERENCES bazs(id)
 Triggers:
 timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
 EXECUTE PROCEDURE timestamps_tfun()

 Let me know if you need anything else.

 Thanks,


 On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
  pavel.steh...@gmail.com wrote:

 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
 commanded this server process to roll back the current transaction and
 exit, because another server process exited abnormally and possibly
 corrupted shared memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
 able to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when
 I ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
 4.7.2, 64-bit

 Any help would be much appreciated, thanks!

 -Robert







[GENERAL] Memory Issue with array_agg?

2013-08-18 Thread Robert Sosinski
When using array_agg on a large table, memory usage seems to spike up until
Postgres crashes with the following error:

2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection because
of crash of another server process
2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
this server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able to
reconnect to the database and repeat your command.

I've definitely isolated it down to using array_agg, as when I changed the
query to use string_agg, it worked fine.  I also tried using array_agg on a
few different queries, all yielding the same issue.  Swapping in string_agg
fixed the issue once more.

This particular table has over 2 million rows and is 1.2 Gigs, and when I
ran the query while viewing htop, the virtual size of the Postgres process
ballooned to 13.9G until crashing.

The version of Postgres I am using is: PostgreSQL 9.2.4 on
x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
4.7.2, 64-bit

Any help would be much appreciated, thanks!

-Robert


Re: [GENERAL] Memory Issue with array_agg?

2013-08-18 Thread Christoph Moench-Tegeder
## Robert Sosinski (rsosin...@ticketevolution.com):

 When using array_agg on a large table, memory usage seems to spike up until
 Postgres crashes with the following error:

This sounds like bug #7916.
http://www.postgresql.org/message-id/e1uceeu-0004hy...@wrigleys.postgresql.org
As noted in that thread, changing the AllocSet parameters for
accumArrayResult may help (in the call to AllocSetContextCreate() in
ArrayBuildState *accumArrayResult(), src/backend/utils/adt/arrayfuncs.c,
change ALLOCSET_DEFAULT_*SIZE to ALLOCSET_SMALL_*SIZE).
Also, lowering work_mem may help, depending on your SQL.
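The work_mem adjustment can be tried per session before touching postgresql.conf; a hedged sketch (the 16MB value is purely illustrative, not a recommendation from this thread, and the query reuses the hypothetical `orders` example from earlier in the thread):

```sql
-- Sketch: shrink the per-sort/per-hash memory budget for this session
-- only, then re-run the problematic aggregate and inspect the plan.
SET work_mem = '16MB';
EXPLAIN (ANALYZE) SELECT guid, array_agg(id) FROM orders GROUP BY guid;
RESET work_mem;  -- restore the server default for this session
```

A smaller work_mem can steer the planner away from a hash-based aggregate, which changes where the memory is spent during grouping.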

Regards,
Christoph

-- 
Spare Space


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-07-03 Thread Daniel Cristian Cruz
I found where the problem is:

http://www.postgresql.org/docs/9.1/static/release-9-1-6.html

It would be nice if something were added to the 9.2 release notes to warn
the admins.


2013/4/24 Adrian Klaver adrian.kla...@gmail.com

 On 04/24/2013 11:20 AM, Daniel Cristian Cruz wrote:

 I've done an explain analyze under the test environment, and there is no
 aggressive memory usage.

 So I dropped the database in the new cluster and restored a fresh dump
 from production (in theory, that's the difference between the two
 environments).

 Some minutes after I got an answer: after a dump / restore, there is no
 problem. The same cluster just a dump/restore.

 Since I had no idea on what is the problem, and the structure from the
 dump of the bad database is equal to the dump from the production (it
 differs only in check constraints where ((turma.situacao)::text = ANY
 ((ARRAY['Aguardando Atualização'::character varying, 'Em
 Andamento'::character varying])::text[]))) became
 ((turma.situacao)::text = ANY (ARRAY[('Aguardando
 Atualização'::character varying)::text, ('Em Andamento'::character
  varying)::text], how can I report an issue in pg_upgrade?


 The bug reporting page is here:

 http://www.postgresql.org/support/submitbug/



 Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup
 until May 1st. Until then, if someone would like to know something
 about it, just ask me; I would like to help resolve the issue.




 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-24 Thread Daniel Cristian Cruz
I've done an explain analyze under the test environment, and there is no
aggressive memory usage.

So I dropped the database in the new cluster and restored a fresh dump from
production (in theory, that's the difference between the two environments).

Some minutes later I got an answer: after a dump/restore, there is no
problem. The same cluster, just with a dump/restore.

Since I had no idea on what is the problem, and the structure from the dump
of the bad database is equal to the dump from the production (it differs
only in check constraints where ((turma.situacao)::text = ANY
((ARRAY['Aguardando Atualização'::character varying, 'Em
Andamento'::character varying])::text[]))) became ((turma.situacao)::text
= ANY (ARRAY[('Aguardando Atualização'::character varying)::text, ('Em
Andamento'::character varying)::text], how can I report an issue in
pg_upgrade?

Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup until
May 1st. Until then, if someone would like to know something about it,
just ask me; I would like to help resolve the issue.


2013/4/23 Daniel Cristian Cruz danielcrist...@gmail.com

 2013/4/23 Adrian Klaver adrian.kla...@gmail.com

 On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

 2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com
  mailto:danielcrist...@gmail.com


 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
 ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
 ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
  ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente ) > 0
 FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
 pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
 presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
 JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
 recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
 turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
 senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
 ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
 recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
 ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
 '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY
 inicio;

 server 9.1:
  http://explain.depesz.com/s/fmM

 server 9.2:
  http://explain.depesz.com/s/wXm

  After running this one on server 9.2, RES memory reached 6.5GB, VIRT
 15GB.


 Since there is no response, is this memory usage normal? The same query
 on version 9.1 doesn't use that much memory.


 Not sure how it applies but I noticed that a GroupAggregate in 9.1 that
  took 1.22 secs became a HashAggregate in the 9.2 query and took 12.54
 secs.


 I used to read a explain and find something, but this one is huge.
 Unfortunately I'm still working on data migration from the 9.2 to 9.1 and
 didn't get time to read it in detail...

 I'm concerned about this because there is only one report like
 that. Does anyone else have the same pattern when using inherited tables?


 Also noticed that in your 9.2 production conf:

 (no constraint_exclusion set)

 Does this mean the default of 'partition' was left as is or that the
 setting was set to 'off'?


 No, default:

 senai=# show constraint_exclusion ;
  constraint_exclusion
 --
  partition
 (1 row)

 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-24 Thread Adrian Klaver

On 04/24/2013 11:20 AM, Daniel Cristian Cruz wrote:

I've done an explain analyze under the test environment, and there is no
aggressive memory usage.

So I dropped the database in the new cluster and restored a fresh dump
from production (in theory, that's the difference between the two
environments).

Some minutes after I got an answer: after a dump / restore, there is no
problem. The same cluster just a dump/restore.

Since I had no idea on what is the problem, and the structure from the
dump of the bad database is equal to the dump from the production (it
differs only in check constraints where ((turma.situacao)::text = ANY
((ARRAY['Aguardando Atualização'::character varying, 'Em
Andamento'::character varying])::text[]))) became
((turma.situacao)::text = ANY (ARRAY[('Aguardando
Atualização'::character varying)::text, ('Em Andamento'::character
varying)::text], how can I report an issue in pg_upgrade?


The bug reporting page is here:

http://www.postgresql.org/support/submitbug/



Thanks, Adrian, Tomas and Tom. It's a specific issue; it's on backup
until May 1st. Until then, if someone would like to know something
about it, just ask me; I would like to help resolve the issue.






--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Daniel Cristian Cruz
2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com

 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino, ced.evento,
 ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo, ced.id_tipo_evento,
 ced.tipo_evento, ac.media_referencia, p.nome, ef.nome AS nomeEspacoFisico,
 ( SELECT count ( pre2.presente ) > 0 FROM turma.presenca pre2 WHERE
 pre2.id_aula = ac.id_aula AND pre2.id_evento = ac.id_evento AND
 pre2.id_diario = '64469' ) AS presenca, ced.id_aula FROM
 recurso.consulta_evento_diario ced LEFT JOIN recurso.evento e USING (
 id_evento ) LEFT JOIN recurso.espaco_fisico ef USING ( id_espaco_fisico )
 LEFT JOIN turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN senai.pessoa p
 USING ( id_pessoa ) WHERE id_diario = '64469' AND ced.id_evento NOT IN (
 SELECT ec.id_evento_sobreposto FROM recurso.evento_conflito ec WHERE
 ec.id_evento_sobreposto = ced.id_evento AND ec.ignorado IS NULL ) AND
 ced.inicio BETWEEN '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99'
 ORDER BY inicio;

 server 9.1:
 http://explain.depesz.com/s/fmM

 server 9.2:
 http://explain.depesz.com/s/wXm

 After running this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


Since there is no response, is this memory usage normal? The same query on
version 9.1 doesn't use that much memory.

I'm concerned about this because there is only one report like that.
Does anyone else have the same pattern when using inherited tables?

Just for information, my schema uses one table that is inherited by all
others tables; it is an audit record: creator, creation time, creator
application, updater, update time, updater application, table name and
record id.

Thanks,
-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Adrian Klaver

On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com
mailto:danielcrist...@gmail.com

query1:
EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente ) > 0
FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
'2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY inicio;

server 9.1:
http://explain.depesz.com/s/fmM

server 9.2:
http://explain.depesz.com/s/wXm

After running this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


Since there is no response, is this memory usage normal? The same query
on version 9.1 doesn't use that much memory.


Not sure how it applies but I noticed that a GroupAggregate in 9.1 that 
took 1.22 secs became a HashAggregate in the 9.2 query and took 12.54 
secs.




I'm concerned about this because there is only one report like
that. Does anyone else have the same pattern when using inherited tables?


Also noticed that in your 9.2 production conf:

(no constraint_exclusion set)

Does this mean the default of 'partition' was left as is or that the 
setting was set to 'off'?




Just for information, my schema uses one table that is inherited by all
others tables; it is an audit record: creator, creation time, creator
application, updater, update time, updater application, table name and
record id.

Thanks,
--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-23 Thread Daniel Cristian Cruz
2013/4/23 Adrian Klaver adrian.kla...@gmail.com

 On 04/23/2013 04:23 AM, Daniel Cristian Cruz wrote:

 2013/4/22 Daniel Cristian Cruz danielcrist...@gmail.com
 mailto:danielcristian@gmail.**com danielcrist...@gmail.com


 query1:
 EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino,
 ced.evento, ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo,
 ced.id_tipo_evento, ced.tipo_evento, ac.media_referencia, p.nome,
 ef.nome AS nomeEspacoFisico, ( SELECT count ( pre2.presente )  0
 FROM turma.presenca pre2 WHERE pre2.id_aula = ac.id_aula AND
 pre2.id_evento = ac.id_evento AND pre2.id_diario = '64469' ) AS
 presenca, ced.id_aula FROM recurso.consulta_evento_diario ced LEFT
 JOIN recurso.evento e USING ( id_evento ) LEFT JOIN
 recurso.espaco_fisico ef USING ( id_espaco_fisico ) LEFT JOIN
 turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
 recurso.evento_participante ep USING ( id_evento ) LEFT JOIN
 senai.pessoa p USING ( id_pessoa ) WHERE id_diario = '64469' AND
 ced.id_evento NOT IN ( SELECT ec.id_evento_sobreposto FROM
 recurso.evento_conflito ec WHERE ec.id_evento_sobreposto =
 ced.id_evento AND ec.ignorado IS NULL ) AND ced.inicio BETWEEN
 '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99' ORDER BY
 inicio;

 server 9.1:
 http://explain.depesz.com/s/**fmM http://explain.depesz.com/s/fmM

 server 9.2:
 http://explain.depesz.com/s/**wXm http://explain.depesz.com/s/wXm

 After run this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.


 Since there is no response, is this memory usage normal? The same query
 on version 9.1 doesn't use that much memory.


 Not sure how it applies but I noticed that a GroupAggregate in 9.1 that
 took 1.22 secs became a a HashAggregate in the 9.2 query and took 12.54
 secs.


I used to read a explain and find something, but this one is huge.
Unfortunately I'm still working on data migration from the 9.2 to 9.1 and
didn't get time to read it in detail...

I'm concerned about this because there is just only one report like
 that. Does someone else has the same pattern when using inherited tables?


 Also noticed that in your 9.2 production conf:

 (no constraint_exclusion set)

 Does this mean the default of 'partition' was left as is or that the
 setting was set to 'off'?


No, default:

senai=# show constraint_exclusion ;
 constraint_exclusion
--
 partition
(1 row)

-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-22 Thread Daniel Cristian Cruz
I'm running pgBadger over the log, and will get some queries to explain
analyze them.

The 9.2 cluster is running in the same server as the production, so I will
try to compare some critical explains and publish on explain.depesz.com.

2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 09:29 AM, Daniel Cristian Cruz wrote:




 2013/4/21 Adrian Klaver adrian.kla...@gmail.com
 mailto:adrian.klaver@gmail.**com adrian.kla...@gmail.com




 1)
 Major upgrade from 9.1.4 to 9.2.4.
 Used pg_upgrade
 Tested on VM with 9.2.4 and no problems.
 Same machine used for production server 9.1.4 and 9.2.4
 When complex queries where run on production server under 9.2.4
 memory usage climbed out of control.

 Unanswered questions:

 a) Data set sizes between test and production machines, how do they
 differ?


 It's the same on both; we do a dump/restore every day to the development
 / issue team work.


 Which begs the question, what is different about your test setup that
 makes it not act up?


test environment:

virtual server
8 cores
12 GB RAM
4GB SWAP

max_connections = 200
shared_buffers = 800MB
temp_buffers = 32MB
work_mem = 256MB
maintenance_work_mem = 768MB
(no max_stack_depth set)
shared_preload_libraries =
'$libdir/plpgsql,$libdir/plpython2,$libdir/pgxml,$libdir/pg_stat_statements'
wal_level = minimal
checkpoint_segments = 15
archive_mode = off
max_wal_senders = 0
(no effective_cache_size set)
constraint_exclusion = partition
log_min_duration_statement = 1000
(no log_temp_files set)
(no statement_timeout set)
max_locks_per_transaction = 1024

production:

true server
24 cores
96GB RAM
50GB SWAP

max_connections = 1000
shared_buffers = 6GB
temp_buffers = 24MB
work_mem = 24MB
maintenance_work_mem = 128MB
max_stack_depth = 8MB
(no shared_preload_libraries, disabled after problems because
pg_stat_statements was a new module)
wal_level = hot_standby
checkpoint_segments = 20
archive_mode = on
(archive_command set)
max_wal_senders = 1
effective_cache_size = 32GB
(no constraint_exclusion set)
log_min_duration_statement = 5000
log_temp_files = 0
statement_timeout = 30
(no max_locks_per_transaction set)


 We know that the test servers are running on VMs with fewer resources than
 the production server.

 So:

 Are the VMs running the same OS and OS version as the production server?


test: Red Hat Enterprise Linux Server release 5.5, Linux
2.6.18-194.26.1.el5 #1 SMP Fri Oct 29 14:21:16 EDT 2010 x86_64 x86_64
x86_64 GNU/Linux
production: CentOS release 5.5, Linux 2.6.18-194.32.1.el5 #1 SMP Wed Jan 5
17:52:25 EST 2011 x86_64 x86_64 x86_64 GNU/Linux



 What are 'hardware differences' between the test VMs and the physical
 server?


above.



 Are the Postgres configurations different for the test vs production
 servers?


Yes, some of them, shown above.


 I would guess the usage pattern is different, but in what way?
Number of connections/sessions?


300 connections in production, 50 in test.


INSERT/UPDATE/DELETE pattern?


test: just test cases, development cases and issue cases
production: for 1 minute (00:01:00.076343), 583 inserts, 306 updates and 13
deletes and 3300 transactions (xacts_commits)



Client software using the database?


Sites using PHP with and without connection pool and with and without
Doctrine; a huge system with Java and Hibernate using the JBoss pooler,
Java aplication is the main user.






 b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4
 test and 9.2.4 production?
 Since there is no single query causing  the problem, I don't know if it
 could help


 For a lack of anything else pick one and try it on the various servers to
 see if something stands out.


Yes , here it is:

query1:
EXPLAIN ANALYZE SELECT ced.id_evento, ced.inicio, ced.termino, ced.evento,
ced.id_eventos IS NOT NULL AS aula_dividida, ac.titulo, ced.id_tipo_evento,
ced.tipo_evento, ac.media_referencia, p.nome, ef.nome AS nomeEspacoFisico,
( SELECT count ( pre2.presente )  0 FROM turma.presenca pre2 WHERE
pre2.id_aula = ac.id_aula AND pre2.id_evento = ac.id_evento AND
pre2.id_diario = '64469' ) AS presenca, ced.id_aula FROM
recurso.consulta_evento_diario ced LEFT JOIN recurso.evento e USING (
id_evento ) LEFT JOIN recurso.espaco_fisico ef USING ( id_espaco_fisico )
LEFT JOIN turma.aula_calendario ac USING ( id_aula, id_evento ) LEFT JOIN
recurso.evento_participante ep USING ( id_evento ) LEFT JOIN senai.pessoa p
USING ( id_pessoa ) WHERE id_diario = '64469' AND ced.id_evento NOT IN (
SELECT ec.id_evento_sobreposto FROM recurso.evento_conflito ec WHERE
ec.id_evento_sobreposto = ced.id_evento AND ec.ignorado IS NULL ) AND
ced.inicio BETWEEN '2013-04-14 00:00:00' AND '2013-04-20 23:59:59.99'
ORDER BY inicio;

server 9.1:
http://explain.depesz.com/s/fmM

server 9.2:
http://explain.depesz.com/s/wXm

After run this one on server 9.2, RES memory reached 6.5GB, VIRT 15GB.

query 2:
EXPLAIN ANALYZE SELECT count ( 

Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tomas Vondra
Hi,

we got a report of (probably) the same issue on a local mailing list.
Maybe it'll help in finding the root cause, so I'm resending the info
here too.

On 21.4.2013 01:19, Adrian Klaver wrote:
 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:
 I think I didn't make it clear: the session memory usage is growing up
 too fast, until all server memory got used and swap occurs.

 Never saw something like that. The version is under a test enviroment
 for a long time...

 Thanks if someone could help me.
 
 Before any one can help I would think more information is needed;
 
 1) Is it on the same machine/OS as the old version?

Yes. This was an upgrade from 9.2.3 to 9.2.4, so this was the usual
minor upgrade procedure - stop, install 9.2.4 package, start.

AFAIK there was no other change (e.g. update of kernel).

 2) What are the hardware specs for the machine?

32GB of RAM, 6 cores. I don't know which linux distribution they run.

The interesting part is they have a lot of tables due to a partitioned
schema. In total there's ~9500 tables.

 3) Is it still in test mode or in production?

It's in production for a long time and so far it was running fine, until
the upgrade to 9.2.4.

 4) You seem to imply that in test mode everything worked alright, is
 that the case?

It was working fine in the production (exactly the same workload) for a
long time (few months at least).

 5) In either case, test/production, what is being done in the session(s)?

Simple selects, mostly index scans, nothing complex or time consuming.

There's not a particular query that crashes, it's rather about a
combination of queries.

 6) Is there anything in the Postgres logs that might shed light?

I do have a log with the memory context info printed after the OOM
killed the session - see it attached.

Tomas


crash.log.gz
Description: application/gzip

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 I do have a log with the memory context info printed after the OOM
 killed the session - see it attached.

The only thing that seems rather bloated is the CacheMemoryContext,
which seems to be because the backend has cached info about several
thousand tables and indexes.  Given that you say there's 9500 relations
in their schema, it's hard to believe that 9.2.4 is suddenly doing that
where 9.2.3 didn't.  I'm wondering if they've done something else that
restricted the amount of memory available to a backend.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Tom Lane t...@sss.pgh.pa.us

 Tomas Vondra t...@fuzzy.cz writes:
  I do have a log with the memory context info printed after the OOM
  killed the session - see it attached.

 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a RES size a bit
large for idle sessions. Not so large than 9.2. Bellow is the actual server
top.

top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32, 1.28
Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
 0.0%st
Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND




32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres:
writer process



10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres:
integracao senai 10.1.3.200(57290) idle



18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres:
integracao senai 10.1.3.200(51766) idle



23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres: sgn2
senai 10.1.3.1(8353) idle



30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres: sgn2
senai 10.1.3.1(20575) idle



 1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres: sgn2
senai 10.1.3.1(52594) idle



 5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres: sgn2
senai 10.1.3.1(57789) idle



 8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres: sgn2
senai 10.1.3.1(39002) idle



 7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres: sgn2
senai 10.1.3.1(58921) SELECT



 7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres: sgn2
senai 10.1.3.1(38869) idle



29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres: sgn2
senai 10.1.3.1(47910) idle



17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres: sgn2
senai 10.1.3.1(10035) idle



 1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres: sgn2
senai 10.1.3.1(26616) idle



30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres: sgn2
senai 10.1.3.1(20472) idle



 8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres: sgn2
senai 10.1.3.1(59397) idle



30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres: sgn2
senai 10.1.3.1(48402) idle



29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres: sgn2
senai 10.1.3.1(47784) idle



 8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres: sgn2
senai 10.1.3.1(59469) idle



13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres: sgn2
senai 10.1.3.1(31021) idle



18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres: sgn2
senai 10.1.3.1(61676) SELECT



15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres: sgn2
senai 10.1.3.1(7043) SELECT



28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres: sgn2
senai 10.1.3.1(14902) idle



 8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres: sgn2
senai 10.1.3.1(59466) idle



30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres: sgn2
senai 10.1.3.1(48499) idle




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/20/2013 05:19 PM, Daniel Cristian Cruz wrote:

Copying to list to fill in blanks.





2013/4/20 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com

On 04/20/2013 04:30 PM, Daniel Cristian Cruz wrote:




2013/4/20 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com
mailto:adrian.klaver@gmail.__com mailto:adrian.kla...@gmail.com


 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

 I think I didn't make it clear: the session memory usage is
 growing up
 too fast, until all server memory got used and swap occurs.

 Never saw something like that. The version is under a test
 enviroment
 for a long time...

 Thanks if someone could help me.


 Before any one can help I would think more information is
needed;

 1) Is it on the same machine/OS as the old version?


Yes same machine, CentOS 5.5, just upgraded PostgreSQL only


How did you upgrade?
 pg_upgrade
 dump/restore


pg_upgrade




 4) You seem to imply that in test mode everything worked
alright, is
 that the case?


Yes, got two servers, never got the same issue


So what difference is there between the test and production servers?


A real server, two VMs with very less memory and CPUs





 5) In either case, test/production, what is being done in the
 session(s)?


Some complex queries. Some very complex queries...


Any chance to see an EXPLAIN ANALYZE for query on test machine vs
production?


Right now I'm building a tool to import the loose records from the new
version to the old, since I'm doing a rollback over the upgrade. As soon
I finish it, I could debug what is happening in the cluster.









 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
adrian.kla...@gmail.com mailto:adrian.kla...@gmail.com
mailto:adrian.klaver@gmail.__com mailto:adrian.kla...@gmail.com





--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com mailto:adrian.kla...@gmail.com




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us

Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
  I do have a log with the memory context info printed after the OOM
  killed the session - see it attached.

The only thing that seems rather bloated is the CacheMemoryContext,
which seems to be because the backend has cached info about several
thousand tables and indexes.  Given that you say there's 9500 relations
in their schema, it's hard to believe that 9.2.4 is suddenly doing that
where 9.2.3 didn't.  I'm wondering if they've done something else that
restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a RES size a bit
large for idle sessions. Not so large than 9.2. Bellow is the actual
server top.


Just to be clear the below is for the 9.1.4 server you rolled backed to?



top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32, 1.28
Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
  0.0%st
Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres:
writer process
10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres:
integracao senai 10.1.3.200(57290) idle
18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres:
integracao senai 10.1.3.200(51766) idle
23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres:
sgn2 senai 10.1.3.1(8353) idle
30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres:
sgn2 senai 10.1.3.1(20575) idle
  1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres:
sgn2 senai 10.1.3.1(52594) idle
  5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres:
sgn2 senai 10.1.3.1(57789) idle
  8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres:
sgn2 senai 10.1.3.1(39002) idle
  7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres:
sgn2 senai 10.1.3.1(58921) SELECT
  7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres:
sgn2 senai 10.1.3.1(38869) idle
29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres:
sgn2 senai 10.1.3.1(47910) idle
17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres:
sgn2 senai 10.1.3.1(10035) idle
  1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres:
sgn2 senai 10.1.3.1(26616) idle
30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres:
sgn2 senai 10.1.3.1(20472) idle
  8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres:
sgn2 senai 10.1.3.1(59397) idle
30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres:
sgn2 senai 10.1.3.1(48402) idle
29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres:
sgn2 senai 10.1.3.1(47784) idle
  8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres:
sgn2 senai 10.1.3.1(59469) idle
13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres:
sgn2 senai 10.1.3.1(31021) idle
18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres:
sgn2 senai 10.1.3.1(61676) SELECT
15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres:
sgn2 senai 10.1.3.1(7043) SELECT
28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres:
sgn2 senai 10.1.3.1(14902) idle
  8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres:
sgn2 senai 10.1.3.1(59466) idle
30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres:
sgn2 senai 10.1.3.1(48499) idle

--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Tomas Vondra
On 21.4.2013 15:14, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 I do have a log with the memory context info printed after the OOM
 killed the session - see it attached.
 
 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.

My thoughts, exactly. I can't really compare the CacheMemoryContext to
the 9.1.3, as I have no data from that version. So maybe it really did
not change, but something else obviously did.

I'm not aware of any other changes, but I'll verify that.

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

 2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us


 Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
   I do have a log with the memory context info printed after the OOM
   killed the session - see it attached.

 The only thing that seems rather bloated is the CacheMemoryContext,
 which seems to be because the backend has cached info about several
 thousand tables and indexes.  Given that you say there's 9500
 relations
 in their schema, it's hard to believe that 9.2.4 is suddenly doing
 that
 where 9.2.3 didn't.  I'm wondering if they've done something else that
 restricted the amount of memory available to a backend.


 Maybe, since I'm running the same server and top shows a RES size a bit
 large for idle sessions. Not so large than 9.2. Bellow is the actual
 server top.


 Just to be clear the below is for the 9.1.4 server you rolled backed to?


Yes.





 top - 10:30:35 up 385 days, 14:35,  1 user,  load average: 1.48, 1.32,
 1.28
 Tasks: 668 total,   5 running, 663 sleeping,   0 stopped,   0 zombie
 Cpu(s): 10.3%us,  0.7%sy,  0.0%ni, 87.6%id,  1.3%wa,  0.0%hi,  0.1%si,
   0.0%st
 Mem:  98923768k total, 95618640k used,  3305128k free,   232888k buffers
 Swap: 49150856k total,   264284k used, 48886572k free, 91567048k cached

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 32497 pg91  15   0 6462m 3.5g 3.5g S  0.0  3.7   0:13.13 postgres:
 writer process
 10988 pg91  15   0 6475m 1.7g 1.7g S  0.0  1.9   0:40.74 postgres:
 integracao senai 10.1.3.200(57290) idle
 18518 pg91  18   0 6475m 1.7g 1.7g S  0.0  1.9   0:36.38 postgres:
 integracao senai 10.1.3.200(51766) idle
 23965 pg91  25   0 6528m 1.3g 1.2g S  0.0  1.3   1:09.19 postgres:
 sgn2 senai 10.1.3.1(8353) idle
 30504 pg91  15   0 6700m 1.2g 1.0g S  0.0  1.3   0:17.64 postgres:
 sgn2 senai 10.1.3.1(20575) idle
   1399 pg91  17   0 6515m 1.2g 1.2g S  0.0  1.3   0:56.62 postgres:
 sgn2 senai 10.1.3.1(52594) idle
   5732 pg91  15   0 6521m 1.1g 1.1g S  0.0  1.2   0:33.35 postgres:
 sgn2 senai 10.1.3.1(57789) idle
   8223 pg91  15   0 6520m 1.1g 1.1g S  0.0  1.2   0:22.02 postgres:
 sgn2 senai 10.1.3.1(39002) idle
   7244 pg91  16   0 6527m 1.1g 1.0g R  6.1  1.2   0:14.65 postgres:
 sgn2 senai 10.1.3.1(58921) SELECT
   7916 pg91  15   0 6527m 1.1g 1.0g S  0.0  1.1   0:32.47 postgres:
 sgn2 senai 10.1.3.1(38869) idle
 29701 pg91  15   0 6517m 1.0g 1.0g S  0.0  1.1   0:08.02 postgres:
 sgn2 senai 10.1.3.1(47910) idle
 17445 pg91  17   0 6519m 1.0g 1.0g S  0.0  1.1   0:06.75 postgres:
 sgn2 senai 10.1.3.1(10035) idle
   1398 pg91  17   0 6513m 1.0g 951m S  0.0  1.0   1:46.55 postgres:
 sgn2 senai 10.1.3.1(26616) idle
 30155 pg91  15   0 6496m 825m 792m S  0.0  0.9   0:16.80 postgres:
 sgn2 senai 10.1.3.1(20472) idle
   8225 pg91  15   0 6511m 743m 696m S  0.0  0.8   0:17.39 postgres:
 sgn2 senai 10.1.3.1(59397) idle
 30156 pg91  15   0 6492m 712m 683m S  0.0  0.7   0:18.27 postgres:
 sgn2 senai 10.1.3.1(48402) idle
 29471 pg91  15   0 6514m 559m 508m S  0.0  0.6   0:11.30 postgres:
 sgn2 senai 10.1.3.1(47784) idle
   8314 pg91  15   0 6695m 447m 225m S  0.0  0.5   0:14.44 postgres:
 sgn2 senai 10.1.3.1(59469) idle
 13807 pg91  15   0 6492m 399m 369m S  0.0  0.4   2:00.32 postgres:
 sgn2 senai 10.1.3.1(31021) idle
 18621 pg91  17   0 6554m 370m 294m R 44.4  0.4   0:01.17 postgres:
 sgn2 senai 10.1.3.1(61676) SELECT
 15904 pg91  17   0 6507m 316m 273m R 66.4  0.3   0:04.10 postgres:
 sgn2 senai 10.1.3.1(7043) SELECT
 28055 pg91  18   0 6493m 236m 203m S  0.0  0.2   0:02.72 postgres:
 sgn2 senai 10.1.3.1(14902) idle
   8313 pg91  15   0 6684m 228m  85m S  0.0  0.2   0:00.59 postgres:
 sgn2 senai 10.1.3.1(59466) idle
 30503 pg91  15   0 6682m 212m  71m S  0.0  0.2   0:00.43 postgres:
 sgn2 senai 10.1.3.1(48499) idle

 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/**mailpref/pgsql-generalhttp://www.postgresql.org/mailpref/pgsql-general




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 07:50 AM, Daniel Cristian Cruz wrote:




2013/4/21 Adrian Klaver adrian.kla...@gmail.com
mailto:adrian.kla...@gmail.com

On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us
mailto:t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us


 Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz
mailto:t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
   I do have a log with the memory context info printed
after the OOM
   killed the session - see it attached.

 The only thing that seems rather bloated is the
CacheMemoryContext,
 which seems to be because the backend has cached info about
several
 thousand tables and indexes.  Given that you say there's
9500 relations
 in their schema, it's hard to believe that 9.2.4 is
suddenly doing that
 where 9.2.3 didn't.  I'm wondering if they've done
something else that
 restricted the amount of memory available to a backend.


Maybe, since I'm running the same server and top shows a RES
size a bit
large for idle sessions. Not so large than 9.2. Bellow is the actual
server top.


Just to be clear the below is for the 9.1.4 server you rolled backed to?


Yes.


To recap for those following along, there are two different cases in 
play here.


1)
Major upgrade from 9.1.4 to 9.2.4.
Used pg_upgrade
Tested on VM with 9.2.4 and no problems.
Same machine used for production server 9.1.4 and 9.2.4
When complex queries where run on production server under 9.2.4 memory 
usage climbed out of control.


Unanswered questions:

a) Data set sizes between test and production machines, how do they differ?

b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4 test 
and 9.2.4 production?


2)
Minor upgrade 9.2.3 to 9.2.4
No test server, went production to production.
Same machine.
When some combination of queries where run on 9.2.4 memory usage climbed 
out of control.







--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
2013/4/21 Adrian Klaver adrian.kla...@gmail.com

 On 04/21/2013 07:50 AM, Daniel Cristian Cruz wrote:




 2013/4/21 Adrian Klaver adrian.kla...@gmail.com
 mailto:adrian.klaver@gmail.**com adrian.kla...@gmail.com


 On 04/21/2013 06:37 AM, Daniel Cristian Cruz wrote:

 2013/4/21 Tom Lane t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us
 mailto:t...@sss.pgh.pa.us mailto:t...@sss.pgh.pa.us



  Tomas Vondra t...@fuzzy.cz mailto:t...@fuzzy.cz
 mailto:t...@fuzzy.cz mailto:t...@fuzzy.cz writes:
I do have a log with the memory context info printed
 after the OOM
killed the session - see it attached.

  The only thing that seems rather bloated is the
 CacheMemoryContext,
  which seems to be because the backend has cached info about
 several
  thousand tables and indexes.  Given that you say there's
 9500 relations
  in their schema, it's hard to believe that 9.2.4 is
 suddenly doing that
  where 9.2.3 didn't.  I'm wondering if they've done
 something else that
  restricted the amount of memory available to a backend.


 Maybe, since I'm running the same server and top shows a RES
 size a bit
 large for idle sessions. Not so large than 9.2. Bellow is the
 actual
 server top.


 Just to be clear the below is for the 9.1.4 server you rolled backed
 to?


 Yes.


 To recap for those following along, there are two different cases in play
 here.

 1)
 Major upgrade from 9.1.4 to 9.2.4.
 Used pg_upgrade
 Tested on VM with 9.2.4 and no problems.
 Same machine used for production server 9.1.4 and 9.2.4
 When complex queries where run on production server under 9.2.4 memory
 usage climbed out of control.

 Unanswered questions:

 a) Data set sizes between test and production machines, how do they differ?


It's the same on both; we do a dump/restore every day to the development /
issue team work.



 b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4 test
 and 9.2.4 production?


Since there is no single query causing  the problem, I don't know if it
could help.

One thing I didn't mention: I build a record table used to track every
transaction on database, and all tables are inherited from it. This way I
easily found the records to transfer from 9.2.4 to 9.1.x.

But it's something similar with the other case, where they are using for
partitioning purposes, and I'm using to simplify the model (the inherited
table is hidden in the model).

Thanks,
-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Fwd: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
Sorry, I answered to Tomas only...

-- Forwarded message --
From: Daniel Cristian Cruz danielcrist...@gmail.com
Date: 2013/4/21
Subject: Re: [GENERAL] Memory usage after upgrade to 9.2.4
To: Tomas Vondra t...@fuzzy.cz


I had the same environment, almost:


2013/4/21 Tomas Vondra t...@fuzzy.cz

  2) What are the hardware specs for the machine?


 32GB of RAM, 6 cores. I don't know which linux distribution they run.

 The interesting part is they have a lot of tables due to a partitioned
 schema. In total there's ~9500 tables.


We had many tables, not that many, 743 right now.



  3) Is it still in test mode or in production?

 It's in production for a long time and so far it was running fine, until
 the upgrade to 9.2.4.


Same here.



  4) You seem to imply that in test mode everything worked alright, is
  that the case?

 It was working fine in production (exactly the same workload) for a
 long time (a few months at least).


Production is running 9.1.4; the test environments have been on 9.2.4 for some
time (they use a dump from production, updated daily or on request).



  5) In either case, test/production, what is being done in the session(s)?

 Simple selects, mostly index scans, nothing complex or time consuming.

 There's not a particular query that crashes, it's rather about a
 combination of queries.


I can say that they are mostly simple queries, but more complex
queries do show up in the log.



  6) Is there anything in the Postgres logs that might shed light?

 I do have a log with the memory context info printed after the OOM
 killed the session - see it attached.


I didn't let the OOM killer work, since I was the session killer...

Thanks Tomas, at least I'm not so alone now...

-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Fwd: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Daniel Cristian Cruz
And this one only to Adrian.

Sorry to all.

-- Forwarded message --
From: Daniel Cristian Cruz danielcrist...@gmail.com
Date: 2013/4/20
Subject: Re: [GENERAL] Memory usage after upgrade to 9.2.4
To: Adrian Klaver adrian.kla...@gmail.com





2013/4/20 Adrian Klaver adrian.kla...@gmail.com

 On 04/20/2013 04:30 PM, Daniel Cristian Cruz wrote:




 2013/4/20 Adrian Klaver adrian.kla...@gmail.com


 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

 I think I didn't make it clear: the session memory usage is growing
 too fast, until all server memory is used and swapping occurs.

 Never saw anything like that. This version has been in a test
 environment for a long time...

 Thanks if someone can help me.


 Before anyone can help, I would think more information is needed:

 1) Is it on the same machine/OS as the old version?


 Yes, same machine (CentOS 5.5); only PostgreSQL was upgraded


 How did you upgrade?
 pg_upgrade
 dump/restore


pg_upgrade





 4) You seem to imply that in test mode everything worked alright, is
 that the case?


 Yes, we've got two test servers and never saw the same issue


 So what difference is there between the test and production servers?


Production is a real server; the tests are two VMs with much less memory and
fewer CPUs





 5) In either case, test/production, what is being done in the
 session(s)?


 Some complex queries. Some very complex queries...


 Any chance to see an EXPLAIN ANALYZE for query on test machine vs
 production?


Right now I'm building a tool to import the loose records from the new
version back to the old one, since I'm rolling back the upgrade. As soon as I
finish it, I can debug what is happening in the cluster.









 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com





 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-21 Thread Adrian Klaver

On 04/21/2013 09:29 AM, Daniel Cristian Cruz wrote:




2013/4/21 Adrian Klaver adrian.kla...@gmail.com





1)
Major upgrade from 9.1.4 to 9.2.4.
Used pg_upgrade
Tested on VM with 9.2.4 and no problems.
Same machine used for production server 9.1.4 and 9.2.4
When complex queries were run on the production server under 9.2.4,
memory usage climbed out of control.

Unanswered questions:

a) Data set sizes between test and production machines, how do they
differ?


It's the same on both; we do a dump/restore every day so the development
/ issue team can work.


Which begs the question, what is different about your test setup that 
makes it not act up?


We know that the test servers are running on VMs with fewer resources 
than the production server.


So:

Are the VMs running the same OS and OS version as the production server?

What are 'hardware differences' between the test VMs and the physical 
server?


Are the Postgres configurations different for the test vs production 
servers?


I would guess the usage pattern is different, but in what way?
   Number of connections/sessions?
   INSERT/UPDATE/DELETE pattern?
   Client software using the database?





b) What are the EXPLAIN/ANALYZE results for a query on 9.1.4, 9.2.4
test and 9.2.4 production?
Since there is no single query causing the problem, I don't know if it would
help.


For lack of anything else, pick one and try it on the various servers
to see if something stands out.







Thanks,
--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Daniel Cristian Cruz
Hi all,

I've upgraded from 9.1.4 to 9.2.4, and hit a very weird issue.

My server has connections with too much resident memory (the RES column in
the top utility):

top - 19:50:58 up 384 days, 23:55,  2 users,  load average: 4.28, 6.51, 7.68
Tasks: 417 total,   1 running, 416 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.5%us,  0.4%sy,  0.0%ni, 94.8%id,  4.3%wa,  0.0%hi,  0.0%si,
 0.0%st
Mem:  98923768k total, 53750228k used, 45173540k free, 46192k buffers
Swap: 49150856k total, 24147924k used, 25002932k free, 32706740k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND


10522 pg92  17   0 16.7g 7.9g 1.7g S  0.0  8.4   2:44.69 postgres: sgn2
senai 10.1.3.1(24675) idle

 9367 pg92  17   0 16.6g 6.7g 517m S  0.0  7.1   5:05.98 postgres: sgn2
senai 10.1.3.1(44277) idle

13336 pg92  17   0 16.5g 6.6g 511m S  0.0  7.0   0:14.01 postgres: sgn2
senai 10.1.3.1(51436) idle

26864 pg92  18   0 6487m 6.0g 6.0g S  0.0  6.3   3:20.21 postgres: pg92
senai [local] idle

14360 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:02.46 postgres:
integracao senai 10.1.3.200(59197) idle

14678 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:01.34 postgres:
integracao senai 10.1.3.200(59201) idle

10597 pg92  17   0 6513m 878m 830m S  0.0  0.9   0:08.81 postgres:
integracao senai 10.1.3.200(38519) idle

 9301 pg92  15   0 6456m 188m 187m S  0.0  0.2   0:08.98
/home/pg92/bin/postgres

 9408 pg92  17   0 6483m 115m  91m S  0.0  0.1   3:02.13 postgres: sgn2
senai 10.1.3.1(44310) idle

 9306 pg92  15   0 6459m  72m  70m S  0.0  0.1   0:00.93 postgres:
writer process

 9305 pg92  16   0 6459m  60m  59m S  0.0  0.1   0:00.13 postgres:
checkpointer process

15028 pg92  16   0 6462m  54m  51m S  0.0  0.1   0:00.07 postgres:
integracao senai 10.1.3.200(59203) idle

17779 pg92  15   0 6464m  23m  19m S  0.0  0.0   0:00.30 postgres: sgn2
senai 10.1.3.1(34656) idle

 9307 pg92  15   0 6459m 6020 5392 S  0.0  0.0   0:00.01 postgres: wal
writer process

 9308 pg92  15   0 6462m 4584 1740 S  0.0  0.0   0:04.94 postgres:
autovacuum launcher process

17766 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres: sgn2
senai 10.1.3.1(34642) idle

17767 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres: sgn2
senai 10.1.3.1(62906) idle

(many other conns)

I don't know why this is happening. I kept all the same parameters
from 9.1.4.

The only solution right now is to roll back to the older version.

Has anyone seen something like this before?

Thanks,
-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Daniel Cristian Cruz
I think I didn't make it clear: the session memory usage is growing too
fast, until all server memory is used and swapping occurs.

I never saw anything like that. This version has been in a test environment
for a long time...

Thanks if someone can help me.


2013/4/20 Daniel Cristian Cruz danielcrist...@gmail.com

 Hi all,

 I've upgrade from 9.1.4 to 9.2.4, and got some very weird issue.

 My server got connections with RES (column from top utility) with too much
 memory:

 top - 19:50:58 up 384 days, 23:55,  2 users,  load average: 4.28, 6.51,
 7.68
 Tasks: 417 total,   1 running, 416 sleeping,   0 stopped,   0 zombie
 Cpu(s):  0.5%us,  0.4%sy,  0.0%ni, 94.8%id,  4.3%wa,  0.0%hi,  0.0%si,
  0.0%st
 Mem:  98923768k total, 53750228k used, 45173540k free,46192k buffers
 Swap: 49150856k total, 24147924k used, 25002932k free, 32706740k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND


 10522 pg92  17   0 16.7g 7.9g 1.7g S  0.0  8.4   2:44.69 postgres:
 sgn2 senai 10.1.3.1(24675) idle

  9367 pg92  17   0 16.6g 6.7g 517m S  0.0  7.1   5:05.98 postgres:
 sgn2 senai 10.1.3.1(44277) idle

 13336 pg92  17   0 16.5g 6.6g 511m S  0.0  7.0   0:14.01 postgres:
 sgn2 senai 10.1.3.1(51436) idle

 26864 pg92  18   0 6487m 6.0g 6.0g S  0.0  6.3   3:20.21 postgres:
 pg92 senai [local] idle

 14360 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:02.46 postgres:
 integracao senai 10.1.3.200(59197) idle

 14678 pg92  15   0 6461m 1.2g 1.2g S  0.0  1.3   0:01.34 postgres:
 integracao senai 10.1.3.200(59201) idle

 10597 pg92  17   0 6513m 878m 830m S  0.0  0.9   0:08.81 postgres:
 integracao senai 10.1.3.200(38519) idle

  9301 pg92  15   0 6456m 188m 187m S  0.0  0.2   0:08.98
 /home/pg92/bin/postgres

  9408 pg92  17   0 6483m 115m  91m S  0.0  0.1   3:02.13 postgres:
 sgn2 senai 10.1.3.1(44310) idle

  9306 pg92  15   0 6459m  72m  70m S  0.0  0.1   0:00.93 postgres:
 writer process

  9305 pg92  16   0 6459m  60m  59m S  0.0  0.1   0:00.13 postgres:
 checkpointer process

 15028 pg92  16   0 6462m  54m  51m S  0.0  0.1   0:00.07 postgres:
 integracao senai 10.1.3.200(59203) idle

 17779 pg92  15   0 6464m  23m  19m S  0.0  0.0   0:00.30 postgres:
 sgn2 senai 10.1.3.1(34656) idle

  9307 pg92  15   0 6459m 6020 5392 S  0.0  0.0   0:00.01 postgres: wal
 writer process

  9308 pg92  15   0 6462m 4584 1740 S  0.0  0.0   0:04.94 postgres:
 autovacuum launcher process

 17766 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres:
 sgn2 senai 10.1.3.1(34642) idle

 17767 pg92  16   0 6460m 3968 2340 S  0.0  0.0   0:00.00 postgres:
 sgn2 senai 10.1.3.1(62906) idle

 (many other conns)

 I don't know why this is happening. I kept all the same parameters
 from 9.1.4.

 The only solution right now is to roll back to the older version.

 Has anyone seen something like this before?

 Thanks,
 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Adrian Klaver

On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

I think I didn't make it clear: the session memory usage is growing
too fast, until all server memory is used and swapping occurs.

I never saw anything like that. This version has been in a test environment
for a long time...

Thanks if someone can help me.


Before anyone can help, I would think more information is needed:

1) Is it on the same machine/OS as the old version?

2) What are the hardware specs for the machine?

3) Is it still in test mode or in production?

4) You seem to imply that in test mode everything worked alright, is 
that the case?


5) In either case, test/production, what is being done in the session(s)?

6) Is there anything in the Postgres logs that might shed light?




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Daniel Cristian Cruz
2013/4/20 Adrian Klaver adrian.kla...@gmail.com

 On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

 I think I didn't make it clear: the session memory usage is growing
 too fast, until all server memory is used and swapping occurs.

 I never saw anything like that. This version has been in a test environment
 for a long time...

 Thanks if someone can help me.


 Before anyone can help, I would think more information is needed:

 1) Is it on the same machine/OS as the old version?


Yes, same machine (CentOS 5.5); only PostgreSQL was upgraded


 2) What are the hardware specs for the machine?


CISCO Blade, 96GB RAM, 24 cores


 3) Is it still in test mode or in production?


In production, sadly


 4) You seem to imply that in test mode everything worked alright, is that
 the case?


Yes, we've got two test servers and never saw the same issue


 5) In either case, test/production, what is being done in the session(s)?


Some complex queries. Some very complex queries...


 6) Is there anything in the Postgres logs that might shed light?


Unfortunately, not at all. Just the same as the previous version: some (more)
queries taking longer than 10 seconds.

The old config allowed 256MB for work_mem. I reduced it to 24MB on the new
version, but memory usage still grew strongly and fast.

No clue why this is happening.
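A point of general PostgreSQL behavior worth recalling here (not specific to this report): work_mem is a per-sort/per-hash-node limit, not a per-connection one, so a single complex query can allocate several multiples of it in one backend. A sketch of how to observe a node's actual memory use; table and column names are hypothetical:

```sql
-- work_mem applies per sort/hash node; a plan with several such nodes
-- can use multiples of it within one backend.
SET work_mem = '24MB';

-- EXPLAIN (ANALYZE) reports per-node memory in its output, e.g.
--   "Sort Method: quicksort  Memory: 10123kB"          (fit in work_mem)
--   "Sort Method: external merge  Disk: 65536kB"       (spilled to disk)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table ORDER BY some_column;  -- hypothetical names
```

Comparing these node-level numbers between the 9.1 and 9.2 servers is one way to check whether plans (and thus memory use) changed after the upgrade.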





 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
Daniel Cristian Cruz
クルズ クリスチアン ダニエル


Re: [GENERAL] Memory usage after upgrade to 9.2.4

2013-04-20 Thread Adrian Klaver

On 04/20/2013 04:30 PM, Daniel Cristian Cruz wrote:




2013/4/20 Adrian Klaver adrian.kla...@gmail.com

On 04/20/2013 04:08 PM, Daniel Cristian Cruz wrote:

I think I didn't make it clear: the session memory usage is growing
too fast, until all server memory is used and swapping occurs.

I never saw anything like that. This version has been in a test environment
for a long time...

Thanks if someone can help me.


Before anyone can help, I would think more information is needed:

1) Is it on the same machine/OS as the old version?


Yes, same machine (CentOS 5.5); only PostgreSQL was upgraded


How did you upgrade?
pg_upgrade
dump/restore



4) You seem to imply that in test mode everything worked alright, is
that the case?


Yes, we've got two test servers and never saw the same issue


So what difference is there between the test and production servers?




5) In either case, test/production, what is being done in the
session(s)?


Some complex queries. Some very complex queries...


Any chance to see an EXPLAIN ANALYZE for query on test machine vs 
production?










--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Frank Broniewski
FYI 
http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html



Am 2012-11-07 10:28, schrieb Achilleas Mantzios:

On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:

Hey, this is really cool. I directly tried the script and there's a line
from the output that caught my eye:

   mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

is this the shared buffers? I guess so, but I want to confirm my guess ...


Hmm, that would be ideal (from an understanding perspective), but at least in
my system (FreeBSD-8.3), no.

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is some rounded and more rationalized version, less than hw.physmem:
$mem_all = $sysctl->{'vm.stats.vm.v_page_count'} * $sysctl->{'hw.pagesize'};

Anyway, this is not so postgresql related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start.
Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there, would it make sense to come 
back here and maybe ask more questions.
And since we are observing different percentages of gap (mine is 5%, yours is
26%), I think maybe you should look into it in the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!



Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't lose any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT






-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Achilleas Mantzios
Thanx for the link.

I just think it would be a good idea, instead of just posting the links to
this list, to include a dense but detailed summary of the situation on your
machine, and give as much data as possible. In short, you could make a
quantitative compilation of this thread and present it in a nice way in
order to gain more attention.
Also, I think posting to -stable would be a better idea; -questions is for
noobs.

On Παρ 09 Νοε 2012 09:37:14 Frank Broniewski wrote:
 FYI 
 http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html
 
 
 Am 2012-11-07 10:28, schrieb Achilleas Mantzios:
  On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
  Hey, this is really cool. I directly tried the script and there's a line
  from the output that caught my eye:
 
 mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
 
  is this the shared buffers? I guess so, but I want to confirm my guess ...
 
  Hmm, that would be ideal (from an understanding perspective), but at least
  in my system (FreeBSD-8.3), no.
 
  psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
  3840MB
 
  SYSTEM MEMORY INFORMATION:
  mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN
 
  $mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + 
  $mem_cache + $mem_free);
 
  mem_all is some rounded and more rationalized version, less than hw.physmem:
  $mem_all = $sysctl->{'vm.stats.vm.v_page_count'} * $sysctl->{'hw.pagesize'};
 
  Anyway, this is not so postgresql related at the moment. The correct thing 
  to do (since you run production servers on FreeBSD) is to post to the 
  relevant
  FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
  freebsd-sta...@freebsd.org would be a good start.
  Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
  Only after gathering substantial info from there, would it make sense to 
  come back here and maybe ask more questions.
  And since we are observing different percentages of gap (mine is 5%, yours
  is 26%), I think maybe you should look into it in the FreeBSD camp.
 
  Please drop the link to the relevant thread there, if you decide to do so.
 
  I would like to follow this.
 
  Thanx!
 
 
  Frank
 
  Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
  Vick,
  fantastic script, thanx! FreeBSD sysctl system is awesome!
 
  On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
 
 
 
  On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:
 
  and this is after a few hours of running:
 
  Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
  Swap: 4096M Total, 828K Used, 4095M Free
 
 
 
 
  For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, 
  with Postgres 9.0.7 running since June 10, and is heavily pounded on 
  24x7.  The data + indexes are about 240GB on disk.  This server only runs 
  postgres aside from the basic system processes.
 
 
  Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M 
  Free
  Swap: 4096M Total, 272K Used, 4096M Free
 
 
  I agree with the conclusion that the shared memory segments are confusing 
  the output of top.  There are no memory leaks, and FreeBSD doesn't lose 
  any memory.
 
 
  There are some scripts floating around that read values from sysctl 
  vm.stats.vm and format them nicely to tell you how much memory is used up 
  and free.  Try the one referenced here: 
  http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
 
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the 
output of top.  There are no memory leaks, and FreeBSD doesn't lose any 
memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Frank Broniewski
Hey, this is really cool. I directly tried the script and there's a line 
from the output that caught my eye:


 mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

is this the shared buffers? I guess so, but I want to confirm my guess ...

Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't lose any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
 Hey, this is really cool. I directly tried the script and there's a line 
 from the output that caught my eye:
 
   mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
 
 is this the shared buffers? I guess so, but I want to confirm my guess ...

Hmm, that would be ideal (from an understanding perspective), but at least in
my system (FreeBSD-8.3), no.

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is some rounded and more rationalized version, less than hw.physmem:
$mem_all = $sysctl->{'vm.stats.vm.v_page_count'} * $sysctl->{'hw.pagesize'};
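The same gap computation can be expressed directly in shell. The page counts below are made up for illustration; on a real FreeBSD host they would come from `sysctl vm.stats.vm` and `sysctl hw.pagesize`:

```shell
# Hypothetical page counts; real values come from sysctl vm.stats.vm.*
page_size=4096            # hw.pagesize
v_page_count=4915200      # vm.stats.vm.v_page_count

v_wire_count=900000
v_active_count=50000
v_inactive_count=3500000
v_cache_count=200000
v_free_count=40000

# Total memory the VM system knows about, in bytes.
mem_all=$((v_page_count * page_size))

# Bytes accounted for by the five page states.
accounted=$(( (v_wire_count + v_active_count + v_inactive_count \
             + v_cache_count + v_free_count) * page_size ))

# Whatever is left is the "UNKNOWN" gap the script reports.
mem_gap_vm=$((mem_all - accounted))
echo "gap: $((mem_gap_vm / 1024 / 1024)) MB"
```

With these illustrative numbers the gap comes out to a few hundred megabytes; on the poster's machine the same difference was 8404MB.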

Anyway, this is not so postgresql related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant 
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start. 
Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there, would it make sense to come 
back here and maybe ask more questions.
And since we are observing different percentages of gap (mine is 5%, yours is
26%), I think maybe you should look into it in the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!

 
 Frank
 
 Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
  Vick,
  fantastic script, thanx! FreeBSD sysctl system is awesome!
 
  On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
 
 
 
  On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:
 
  and this is after a few hours of running:
 
  Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
  Swap: 4096M Total, 828K Used, 4095M Free
 
 
 
 
  For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
  Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The 
  data + indexes are about 240GB on disk.  This server only runs postgres 
  aside from the basic system processes.
 
 
  Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
  Swap: 4096M Total, 272K Used, 4096M Free
 
 
  I agree with the conclusion that the shared memory segments are confusing 
  the output of top.  There are no memory leaks, and FreeBSD doesn't lose 
  any memory.
 
 
  There are some scripts floating around that read values from sysctl 
  vm.stats.vm and format them nicely to tell you how much memory is used up 
  and free.  Try the one referenced here: 
  http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
 
 
 
 
 
  -
  Achilleas Mantzios
  IT DEPT
 
 
 
 
-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the 
output of top.  There are no memory leaks, and FreeBSD doesn't lose any 
memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-06 Thread Vick Khera
On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski b...@metrico.lu wrote:

 and this is after a few hours of running:

 Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
 Swap: 4096M Total, 828K Used, 4095M Free


For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The
data + indexes are about 240GB on disk.  This server only runs postgres
aside from the basic system processes.

Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free

I agree with the conclusion that the shared memory segments are confusing
the output of top.  There are no memory leaks, and FreeBSD doesn't lose
any memory.

There are some scripts floating around that read values from sysctl
vm.stats.vm and format them nicely to tell you how much memory is used up
and free.  Try the one referenced here:
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/


[GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB of memory.
Usually I use top to examine the memory usage of the system. After a
while, a part of it, approximately 5GB, vanishes from top, so that the
total only adds up to 27GB. After restarting PostgreSQL, all 32GB are
accounted for again, but the total immediately starts decreasing slightly.
It's a standalone database server running an OpenStreetMap world database
with 353GB of data (including indices).


Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off
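As an aside for readers, a rough back-of-the-envelope check of the limits above (arithmetic only; the number of sort/hash nodes per query is a made-up assumption, and actual per-backend usage depends entirely on the plans and concurrency):

```shell
# Worst-case sketch: work_mem is a per-sort/per-hash limit, per backend.
shared_buffers_mb=8192     # shared_buffers = 8GB
work_mem_mb=768            # work_mem = 768MB
max_connections=100
sorts_per_query=2          # hypothetical: two sort/hash nodes per query

worst_case_mb=$((shared_buffers_mb + work_mem_mb * sorts_per_query * max_connections))
echo "worst case: $((worst_case_mb / 1024)) GB"
```

The theoretical worst case is far beyond 32GB of RAM, so a work_mem this large only stays safe as long as few sessions run memory-hungry plans at once.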


So any help finding out why my system loses some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.


Frank



--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu



