Re: [GENERAL] Memory Issue with array_agg?

2013-08-21 Thread Robert Sosinski
Hi Pavel,

Here are the explains you asked for:

explain analyze select string_agg(id::text,',') from things group by guid;
QUERY PLAN

--
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=41434.485..53195.185 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=41434.433..44992.736 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.027..21429.179 rows=2378626 loops=1)
 Total runtime: 56295.362 ms
(6 rows)


explain analyze select array_agg(id::text) from things group by guid;
   QUERY PLAN

-
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=23953.922..38157.059 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=23953.847..27527.316 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
 ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.007..4941.752 rows=2378626 loops=1)
 Total runtime: 41280.897 ms
(6 rows)



These queries seem to run fine on the machine now, and the memory is not
inflating. However, I just ran this one, and it blew up:

explain with t as (select id, guid, md5(concat_ws(':', fields -> 'a',
fields -> 'b', fields -> 'c', fields -> 'd', fields -> 'e', foo_id::text))
from things) select md5, count(id), array_agg(id) from t group by 1 having
count(id) > 1;
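
For comparison, a string_agg version of the same check (which, per earlier in
this thread, did not blow up) would look roughly like this; a sketch only, not
run against this schema:

with t as (
  select id,
         md5(concat_ws(':', fields -> 'a', fields -> 'b', fields -> 'c',
                       fields -> 'd', fields -> 'e', foo_id::text)) as md5
  from things
)
select md5, count(id), string_agg(id::text, ',') as ids
from t
group by 1
having count(id) > 1;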

-Robert


On Tue, Aug 20, 2013 at 1:53 PM, Pavel Stehule wrote:

> Can you send a EXPLAIN result in both use cases?
>
> Pavel
>
>
> 2013/8/19 Robert Sosinski 
>
>> At the moment, all guids are distinct, however before I zapped the
>> duplicates, there were 280 duplicates.
>>
>> Currently, there are over 2 million distinct guids.
>>
>> -Robert
>>
>>
>> On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
>> wrote:
>>
>>>
>>>
>>>
>>> 2013/8/19 Robert Sosinski 
>>>
 Hi Pavel,

 What kind of example do you need?  I cant give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;

>>>
>>> how much distinct guid is there, and how much duplicates
>>>
>>> ??
>>>
>>> regards
>>>
>>> Pavel
>>>
>>>
>>>

 example schema:
  Table "public.things"

Column   |Type |
  Modifiers  | Storage  | Stats target | Description

 +-+-+--+--+-
  id | integer | not null default
 nextval('things_id_seq'::regclass) | plain|  |
  version| integer | not null
  | plain|  |
  created_at | timestamp without time zone | not null
  | plain|  |
  updated_at | timestamp without time zone | not null
  | plain|  |
  foo_id | integer | not null
  | plain|  |
  bar_id | integer | not null
  | plain|  |
  baz_id | integer | not null
  | plain|  |
  guid   | character varying   | not null
  | extended |  |
  name   | character varying   | not null
  | extended |  |
  price  | numeric(12,2)   | not null
  | main |  |
  currency   | character varying   | not null
  | extended |  |
  amount | integer | not null
  | plain|  |
  the_date   | date| not null
  | plain|  |
  fields | hstore   

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Can you send an EXPLAIN result for both use cases?

Pavel


2013/8/19 Robert Sosinski 

> At the moment, all guids are distinct, however before I zapped the
> duplicates, there were 280 duplicates.
>
> Currently, there are over 2 million distinct guids.
>
> -Robert
>
>
> On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
> wrote:
>
>>
>>
>>
>> 2013/8/19 Robert Sosinski 
>>
>>> Hi Pavel,
>>>
>>> What kind of example do you need?  I cant give you the actual data I
>>> have in the table, but I can give you an example query and the schema
>>> attached below.  From there, I would just put in 2 million rows worth 1.2
>>> Gigs of data.  Average size of the the extended columns (using the
>>> pg_column_size function) in bytes are:
>>>
>>> guid: 33
>>> name: 2.41
>>> currency: 4
>>> fields: 120.32
>>>
>>> example query:
>>>
>>> -- find duplicate records using a guid
>>> select guid, array_agg(id) from orders group by guid;
>>>
>>
>> how much distinct guid is there, and how much duplicates
>>
>> ??
>>
>> regards
>>
>> Pavel
>>
>>
>>
>>>
>>> example schema:
>>>  Table "public.things"
>>>
>>>Column   |Type |
>>>  Modifiers  | Storage  | Stats target | Description
>>>
>>> +-+-+--+--+-
>>>  id | integer | not null default
>>> nextval('things_id_seq'::regclass) | plain|  |
>>>  version| integer | not null
>>>| plain|  |
>>>  created_at | timestamp without time zone | not null
>>>| plain|  |
>>>  updated_at | timestamp without time zone | not null
>>>| plain|  |
>>>  foo_id | integer | not null
>>>| plain|  |
>>>  bar_id | integer | not null
>>>| plain|  |
>>>  baz_id | integer | not null
>>>| plain|  |
>>>  guid   | character varying   | not null
>>>| extended |  |
>>>  name   | character varying   | not null
>>>| extended |  |
>>>  price  | numeric(12,2)   | not null
>>>| main |  |
>>>  currency   | character varying   | not null
>>>| extended |  |
>>>  amount | integer | not null
>>>| plain|  |
>>>  the_date   | date| not null
>>>| plain|  |
>>>  fields | hstore  |
>>> | extended |  |
>>> Indexes:
>>> "things_pkey" PRIMARY KEY, btree (id)
>>> "things_foo_id_idx" btree (foo_id)
>>> "things_bar_id_idx" btree (bar_id)
>>> "things_baz_id_idx" btree (baz_id)
>>> "things_guid_uidx" UNIQUE, btree (guid)
>>> "things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10,
>>> '0'::text))
>>> "things_price_idx" btree (price)
>>>
>>> Foreign-key constraints:
>>> "things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
>>> "things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
>>> "things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
>>> Triggers:
>>> timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
>>> EXECUTE PROCEDURE timestamps_tfun()
>>>
>>> Let me know if you need anything else.
>>>
>>> Thanks,
>>>
>>>
>>> On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
>>> wrote:
>>>
 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski 

> When using array_agg on a large table, memory usage seems to spike up
> until Postgres crashes with the following error:
>
> 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
> because of crash of another server process
> 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
> commanded this server process to roll back the current transaction and
> exit, because another server process exited abnormally and possibly
> corrupted shared memory.
> 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
> able to reconnect to the database and repeat your command.
>
> I've definitely isolated it down to using array_agg, as when I changed
> the query to use string_agg, it worked fine.  I also tried using array_agg
> on a few different queries, all yielding the same issue.  Swapping in
> string_agg fixed the issue once more.
>
> This particular table has over 2 million rows and is 

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Hello

It is strange. I tried to simulate it without success. On 1M ids, each
inserted twice (2M rows in total):

processing string_agg: approx. 30MB

processing array_agg: approx. 32MB

postgres=# create table foo(a int, b varchar);
CREATE TABLE
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# CREATE INDEX on foo(b);
CREATE INDEX
postgres=# ANALYZE foo;
ANALYZE
postgres=# explain analyze select string_agg(a::text,',') from foo group by
b;
 QUERY
PLAN

 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10195.972..14993.493 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10195.944..13659.985 rows=2000000 loops=1)
 Sort Key: b
 Sort Method: external merge  Disk: 97768kB
 ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.018..321.197 rows=2000000 loops=1)
 Total runtime: 15066.397 ms
(6 rows)

postgres=# explain analyze select array_agg(a::text) from foo group by b;
 QUERY
PLAN

 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10062.095..15697.755 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10062.059..13613.300 rows=2000000 loops=1)
 Sort Key: b
 Sort Method: external merge  Disk: 97768kB
 ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.029..311.423 rows=2000000 loops=1)
 Total runtime: 15799.226 ms
(6 rows)
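
For completeness, a sketch of a data set closer to what Robert describes (over
2 million mostly distinct guids, only a few hundred duplicated); the table name
and row counts below are assumptions, not taken from his system:

create table foo2(a int, b varchar);
insert into foo2 select i, md5(i::text) from generate_series(1,2000000) g(i);
-- duplicate only a few hundred keys, as in Robert's case
insert into foo2 select i, md5(i::text) from generate_series(1,280) g(i);
analyze foo2;
explain analyze select b, array_agg(a::text) from foo2 group by b;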

Regards

Pavel


2013/8/19 Robert Sosinski 

> At the moment, all guids are distinct, however before I zapped the
> duplicates, there were 280 duplicates.
>
> Currently, there are over 2 million distinct guids.
>
> -Robert
>
>
> On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
> wrote:
>
>>
>>
>>
>> 2013/8/19 Robert Sosinski 
>>
>>> Hi Pavel,
>>>
>>> What kind of example do you need?  I cant give you the actual data I
>>> have in the table, but I can give you an example query and the schema
>>> attached below.  From there, I would just put in 2 million rows worth 1.2
>>> Gigs of data.  Average size of the the extended columns (using the
>>> pg_column_size function) in bytes are:
>>>
>>> guid: 33
>>> name: 2.41
>>> currency: 4
>>> fields: 120.32
>>>
>>> example query:
>>>
>>> -- find duplicate records using a guid
>>> select guid, array_agg(id) from orders group by guid;
>>>
>>
>> how much distinct guid is there, and how much duplicates
>>
>> ??
>>
>> regards
>>
>> Pavel
>>
>>
>>
>>>
>>> example schema:
>>>  Table "public.things"
>>>
>>>Column   |Type |
>>>  Modifiers  | Storage  | Stats target | Description
>>>
>>> +-+-+--+--+-
>>>  id | integer | not null default
>>> nextval('things_id_seq'::regclass) | plain|  |
>>>  version| integer | not null
>>>| plain|  |
>>>  created_at | timestamp without time zone | not null
>>>| plain|  |
>>>  updated_at | timestamp without time zone | not null
>>>| plain|  |
>>>  foo_id | integer | not null
>>>| plain|  |
>>>  bar_id | integer | not null
>>>| plain|  |
>>>  baz_id | integer | not null
>>>| plain|  |
>>>  guid   | character varying   | not null
>>>| extended |  |
>>>  name   | character varying   | not null
>>>| extended |  |
>>>  price  | numeric(12,2)   | not null
>>>| main |  |
>>>  currency   | character varying   | not null
>>>| extended |  |
>>>  amount | integer | not null
>>>| plain|  |
>>>  the_date   | date| not null
>>>| plain|  |
>>>  fields | hstore  |
>>> | extended |  |
>>> Indexes:
>>> "things_pkey" PRI

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
At the moment, all guids are distinct; however, before I zapped the
duplicates, there were 280 duplicates.

Currently, there are over 2 million distinct guids.

-Robert


On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule wrote:

>
>
>
> 2013/8/19 Robert Sosinski 
>
>> Hi Pavel,
>>
>> What kind of example do you need?  I cant give you the actual data I have
>> in the table, but I can give you an example query and the schema attached
>> below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
>> data.  Average size of the the extended columns (using the pg_column_size
>> function) in bytes are:
>>
>> guid: 33
>> name: 2.41
>> currency: 4
>> fields: 120.32
>>
>> example query:
>>
>> -- find duplicate records using a guid
>> select guid, array_agg(id) from orders group by guid;
>>
>
> how much distinct guid is there, and how much duplicates
>
> ??
>
> regards
>
> Pavel
>
>
>
>>
>> example schema:
>>  Table "public.things"
>>
>>Column   |Type |
>>  Modifiers  | Storage  | Stats target | Description
>>
>> +-+-+--+--+-
>>  id | integer | not null default
>> nextval('things_id_seq'::regclass) | plain|  |
>>  version| integer | not null
>>| plain|  |
>>  created_at | timestamp without time zone | not null
>>| plain|  |
>>  updated_at | timestamp without time zone | not null
>>| plain|  |
>>  foo_id | integer | not null
>>| plain|  |
>>  bar_id | integer | not null
>>| plain|  |
>>  baz_id | integer | not null
>>| plain|  |
>>  guid   | character varying   | not null
>>| extended |  |
>>  name   | character varying   | not null
>>| extended |  |
>>  price  | numeric(12,2)   | not null
>>| main |  |
>>  currency   | character varying   | not null
>>| extended |  |
>>  amount | integer | not null
>>| plain|  |
>>  the_date   | date| not null
>>| plain|  |
>>  fields | hstore  |
>>   | extended |  |
>> Indexes:
>> "things_pkey" PRIMARY KEY, btree (id)
>> "things_foo_id_idx" btree (foo_id)
>> "things_bar_id_idx" btree (bar_id)
>> "things_baz_id_idx" btree (baz_id)
>> "things_guid_uidx" UNIQUE, btree (guid)
>> "things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10,
>> '0'::text))
>> "things_price_idx" btree (price)
>>
>> Foreign-key constraints:
>> "things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
>> "things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
>> "things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
>> Triggers:
>> timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
>> EXECUTE PROCEDURE timestamps_tfun()
>>
>> Let me know if you need anything else.
>>
>> Thanks,
>>
>>
>> On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
>> wrote:
>>
>>> Hello
>>>
>>> please, can you send some example or test?
>>>
>>> Regards
>>>
>>> Pavel Stehule
>>>
>>>
>>> 2013/8/19 Robert Sosinski 
>>>
 When using array_agg on a large table, memory usage seems to spike up
 until Postgres crashes with the following error:

 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
 because of crash of another server process
 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
 commanded this server process to roll back the current transaction and
 exit, because another server process exited abnormally and possibly
 corrupted shared memory.
 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
 able to reconnect to the database and repeat your command.

 I've definitely isolated it down to using array_agg, as when I changed
 the query to use string_agg, it worked fine.  I also tried using array_agg
 on a few different queries, all yielding the same issue.  Swapping in
 string_agg fixed the issue once more.

 This particular table has over 2 million rows and is 1.2 Gigs, and when
 I ran the query while viewing htop, the virtual size of the Postgres
 process ballooned to 13.9G until crashing.

 The version of Postgres I am using is: PostgreSQL 9.2.4 on
 x86_64-unknown-l

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
2013/8/19 Robert Sosinski 

> Hi Pavel,
>
> What kind of example do you need?  I cant give you the actual data I have
> in the table, but I can give you an example query and the schema attached
> below.  From there, I would just put in 2 million rows worth 1.2 Gigs of
> data.  Average size of the the extended columns (using the pg_column_size
> function) in bytes are:
>
> guid: 33
> name: 2.41
> currency: 4
> fields: 120.32
>
> example query:
>
> -- find duplicate records using a guid
> select guid, array_agg(id) from orders group by guid;
>

How many distinct guids are there, and how many duplicates?

regards

Pavel



>
> example schema:
>  Table "public.things"
>
>Column   |Type |  Modifiers
>  | Storage  | Stats target | Description
>
> +-+-+--+--+-
>  id | integer | not null default
> nextval('things_id_seq'::regclass) | plain|  |
>  version| integer | not null
>  | plain|  |
>  created_at | timestamp without time zone | not null
>  | plain|  |
>  updated_at | timestamp without time zone | not null
>  | plain|  |
>  foo_id | integer | not null
>  | plain|  |
>  bar_id | integer | not null
>  | plain|  |
>  baz_id | integer | not null
>  | plain|  |
>  guid   | character varying   | not null
>  | extended |  |
>  name   | character varying   | not null
>  | extended |  |
>  price  | numeric(12,2)   | not null
>  | main |  |
>  currency   | character varying   | not null
>  | extended |  |
>  amount | integer | not null
>  | plain|  |
>  the_date   | date| not null
>  | plain|  |
>  fields | hstore  |
>   | extended |  |
> Indexes:
> "things_pkey" PRIMARY KEY, btree (id)
> "things_foo_id_idx" btree (foo_id)
> "things_bar_id_idx" btree (bar_id)
> "things_baz_id_idx" btree (baz_id)
> "things_guid_uidx" UNIQUE, btree (guid)
> "things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10,
> '0'::text))
> "things_price_idx" btree (price)
>
> Foreign-key constraints:
> "things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
> "things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
> "things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
> Triggers:
> timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
> PROCEDURE timestamps_tfun()
>
> Let me know if you need anything else.
>
> Thanks,
>
>
> On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule wrote:
>
>> Hello
>>
>> please, can you send some example or test?
>>
>> Regards
>>
>> Pavel Stehule
>>
>>
>> 2013/8/19 Robert Sosinski 
>>
>>> When using array_agg on a large table, memory usage seems to spike up
>>> until Postgres crashes with the following error:
>>>
>>> 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
>>> because of crash of another server process
>>> 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
>>> this server process to roll back the current transaction and exit, because
>>> another server process exited abnormally and possibly corrupted shared
>>> memory.
>>> 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
>>> to reconnect to the database and repeat your command.
>>>
>>> I've definitely isolated it down to using array_agg, as when I changed
>>> the query to use string_agg, it worked fine.  I also tried using array_agg
>>> on a few different queries, all yielding the same issue.  Swapping in
>>> string_agg fixed the issue once more.
>>>
>>> This particular table has over 2 million rows and is 1.2 Gigs, and when
>>> I ran the query while viewing htop, the virtual size of the Postgres
>>> process ballooned to 13.9G until crashing.
>>>
>>> The version of Postgres I am using is: PostgreSQL 9.2.4 on
>>> x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
>>> 4.7.2, 64-bit
>>>
>>> Any help would be much appreciated, thanks!
>>>
>>> -Robert
>>>
>>
>>
>


Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
Hi Pavel,

What kind of example do you need?  I can't give you the actual data I have
in the table, but I can give you an example query and the schema attached
below.  From there, I would just put in 2 million rows, worth about 1.2 GB of
data.  Average sizes of the extended columns (using the pg_column_size
function), in bytes, are:

guid: 33
name: 2.41
currency: 4
fields: 120.32

example query:

-- find duplicate records using a guid
select guid, array_agg(id) from orders group by guid;
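
As written, that query lists the ids for every guid. A sketch of a variant
restricted to guids that actually have duplicates (same orders table assumed):

select guid, array_agg(id)
from orders
group by guid
having count(id) > 1;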

example schema:
 Table "public.things"

   Column   |Type |  Modifiers
 | Storage  | Stats target | Description
+-+-+--+--+-
 id | integer | not null default
nextval('things_id_seq'::regclass) | plain|  |
 version| integer | not null
 | plain|  |
 created_at | timestamp without time zone | not null
 | plain|  |
 updated_at | timestamp without time zone | not null
 | plain|  |
 foo_id | integer | not null
 | plain|  |
 bar_id | integer | not null
 | plain|  |
 baz_id | integer | not null
 | plain|  |
 guid   | character varying   | not null
 | extended |  |
 name   | character varying   | not null
 | extended |  |
 price  | numeric(12,2)   | not null
 | main |  |
 currency   | character varying   | not null
 | extended |  |
 amount | integer | not null
 | plain|  |
 the_date   | date| not null
 | plain|  |
 fields | hstore  |
| extended |  |
Indexes:
"things_pkey" PRIMARY KEY, btree (id)
"things_foo_id_idx" btree (foo_id)
"things_bar_id_idx" btree (bar_id)
"things_baz_id_idx" btree (baz_id)
"things_guid_uidx" UNIQUE, btree (guid)
"things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10,
'0'::text))
"things_price_idx" btree (price)

Foreign-key constraints:
"things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
"things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
"things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
Triggers:
timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
PROCEDURE timestamps_tfun()

Let me know if you need anything else.

Thanks,


On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule wrote:

> Hello
>
> please, can you send some example or test?
>
> Regards
>
> Pavel Stehule
>
>
> 2013/8/19 Robert Sosinski 
>
>> When using array_agg on a large table, memory usage seems to spike up
>> until Postgres crashes with the following error:
>>
>> 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
>> because of crash of another server process
>> 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
>> this server process to roll back the current transaction and exit, because
>> another server process exited abnormally and possibly corrupted shared
>> memory.
>> 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
>> to reconnect to the database and repeat your command.
>>
>> I've definitely isolated it down to using array_agg, as when I changed
>> the query to use string_agg, it worked fine.  I also tried using array_agg
>> on a few different queries, all yielding the same issue.  Swapping in
>> string_agg fixed the issue once more.
>>
>> This particular table has over 2 million rows and is 1.2 Gigs, and when I
>> ran the query while viewing htop, the virtual size of the Postgres
>> process ballooned to 13.9G until crashing.
>>
>> The version of Postgres I am using is: PostgreSQL 9.2.4 on
>> x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
>> 4.7.2, 64-bit
>>
>> Any help would be much appreciated, thanks!
>>
>> -Robert
>>
>
>


Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
Hello

please, can you send some example or test?

Regards

Pavel Stehule


2013/8/19 Robert Sosinski 

> When using array_agg on a large table, memory usage seems to spike up
> until Postgres crashes with the following error:
>
> 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
> because of crash of another server process
> 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
> this server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able
> to reconnect to the database and repeat your command.
>
> I've definitely isolated it down to using array_agg, as when I changed the
> query to use string_agg, it worked fine.  I also tried using array_agg on a
> few different queries, all yielding the same issue.  Swapping in string_agg
> fixed the issue once more.
>
> This particular table has over 2 million rows and is 1.2 Gigs, and when I
> ran the query while viewing htop, the virtual size of the Postgres
> process ballooned to 13.9G until crashing.
>
> The version of Postgres I am using is: PostgreSQL 9.2.4 on
> x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
> 4.7.2, 64-bit
>
> Any help would be much appreciated, thanks!
>
> -Robert
>


Re: [GENERAL] Memory Issue with array_agg?

2013-08-18 Thread Christoph Moench-Tegeder
## Robert Sosinski (rsosin...@ticketevolution.com):

> When using array_agg on a large table, memory usage seems to spike up until
> Postgres crashes with the following error:

This sounds like bug #7916.
http://www.postgresql.org/message-id/e1uceeu-0004hy...@wrigleys.postgresql.org
As noted in that thread, changing the AllocSet parameters for
accumArrayResult may help (in the call to AllocSetContextCreate() in
ArrayBuildState *accumArrayResult(), src/backend/utils/adt/arrayfuncs.c,
change ALLOCSET_DEFAULT_*SIZE to ALLOCSET_SMALL_*SIZE).
Also, lowering work_mem may help, depending on your SQL.
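
For example, the work_mem part of that suggestion could be tried per session
like this (the value is only illustrative):

set work_mem = '16MB';
select guid, array_agg(id) from things group by guid;
reset work_mem;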

Regards,
Christoph

-- 
Spare Space




[GENERAL] Memory Issue with array_agg?

2013-08-18 Thread Robert Sosinski
When using array_agg on a large table, memory usage seems to spike up until
Postgres crashes with the following error:

2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection because
of crash of another server process
2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded
this server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be able to
reconnect to the database and repeat your command.

I've definitely isolated it down to using array_agg, as when I changed the
query to use string_agg, it worked fine.  I also tried using array_agg on a
few different queries, all yielding the same issue.  Swapping in string_agg
fixed the issue once more.

This particular table has over 2 million rows and is 1.2 Gigs, and when I
ran the query while viewing htop, the virtual size of the Postgres process
ballooned to 13.9G until crashing.

The version of Postgres I am using is: PostgreSQL 9.2.4 on
x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.7.2-2ubuntu1)
4.7.2, 64-bit

Any help would be much appreciated, thanks!

-Robert


Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Achilleas Mantzios
Thanx for the link.

I just think it would be a good idea, instead of posting links to this list,
to include a dense but detailed summary of the situation on your machine, and
give as much data as possible. In short, you might do a quantitative
compilation of this thread and present it in a nice way in order to gain more
attention.
Also, I think posting to -stable would be a better idea; -questions is for noobs.

On Παρ 09 Νοε 2012 09:37:14 Frank Broniewski wrote:
> FYI 
> http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html
> 
> 
> Am 2012-11-07 10:28, schrieb Achilleas Mantzios:
> > On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
> >> Hey, this is really cool. I directly tried the script and there's a line
> >> from the output that caught my eye:
> >>
> >>   > mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
> >>
> >> is this the shared buffers? I guess so, but I want to confirm my guess ...
> >
> > Hmm, that would be ideal, (from an understanding perspective) but at least 
> > in my system (FreeBSD-8.3), no.
> >
> > psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
> > 3840MB
> >
> > SYSTEM MEMORY INFORMATION:
> > mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN
> >
> > $mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + 
> > $mem_cache + $mem_free);
> >
> > mem_all is some rounded and more rationalized version less than hw.physmem 
> > : $mem_all = $sysctl->{"vm.stats.vm.v_page_count"} * 
> > $sysctl->{"hw.pagesize"};
> >
> > Anyway, this is not so postgresql related at the moment. The correct thing 
> > to do (since you run production servers on FreeBSD) is to post to the 
> > relevant
> > FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
> > freebsd-sta...@freebsd.org would be a good start.
> > Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
> > Only after gathering substantial info from there, would it make sense to 
> > come back here and maybe ask more questions.
> > And since we are observing different percentages of gaps (mine is 5%, yours 
> > is 26%), i think maybe you should look into it on the FreeBSD camp.
> >
> > Please drop the link to the relevant thread there, if you decide to do so.
> >
> > I would like to follow this.
> >
> > Thanx!
> >
> >>
> >> Frank
> >>
> >> Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
> >>> Vick,
> >>> fantastic script, thanx! FreeBSD sysctl system is awesome!
> >>>
> >>> On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
> >>>
> >>>
> >>>
> >>> On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:
> >>>
> >>> and this is after a few hours of running:
> >>>
> >>> Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
> >>> Swap: 4096M Total, 828K Used, 4095M Free
> >>>
> >>>
> >>>
> >>>
> >>> For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, 
> >>> with Postgres 9.0.7 running since June 10, and is heavily pounded on 
> >>> 24x7.  The data + indexes are about 240GB on disk.  This server only runs 
> >>> postgres aside from the basic system processes.
> >>>
> >>>
> >>> Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M 
> >>> Free
> >>> Swap: 4096M Total, 272K Used, 4096M Free
> >>>
> >>>
> >>> I agree with the conclusion that the shared memory segments are confusing 
> >>> the output of top.  There are no memory leaks, and FreeBSD doesn't "lose" 
> >>> any memory.
> >>>
> >>>
> >>> There are some scripts floating around that read values from sysctl 
> >>> vm.stats.vm and format them nicely to tell you how much memory is used up 
> >>> and free.  Try the one referenced here: 
> >>> http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> -
> >>> Achilleas Mantzios
> >>> IT DEPT
> >>>
> >>
> >>
> >>
> > -
> > Achilleas Mantzios
> > IT DEPT
> >
> 
> 
> 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-09 Thread Frank Broniewski
FYI 
http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html



Am 2012-11-07 10:28, schrieb Achilleas Mantzios:

On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:

Hey, this is really cool. I directly tried the script and there's a line
from the output that caught my eye:

  > mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

is this the shared buffers? I guess so, but I want to confirm my guess ...


Hmm, that would be ideal, (from an understanding perspective) but at least in 
my system (FreeBSD-8.3), no.

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is some rounded and more rationalized version less than hw.physmem : $mem_all = 
$sysctl->{"vm.stats.vm.v_page_count"} * $sysctl->{"hw.pagesize"};

Anyway, this is not so postgresql related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start.
Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there, would it make sense to come 
back here and maybe ask more questions.
And since we are observing different percentages of gaps (mine is 5%, yours is 
26%), i think maybe you should look into it on the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!



Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't "lose" any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT






-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
> Hey, this is really cool. I directly tried the script and there's a line 
> from the output that caught my eye:
> 
>  > mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN
> 
> is this the shared buffers? I guess so, but I want to confirm my guess ...

Hmm, that would be ideal (from an understanding perspective), but at least on
my system (FreeBSD-8.3), no.

psql -q -t -c "show shared_buffers" | grep -v -e '^$' | awk '{print $1}'
3840MB

SYSTEM MEMORY INFORMATION:
mem_gap_vm:  +996843520 (950MB) [  5%] Memory gap: UNKNOWN

$mem_gap_vm  = $mem_all - ($mem_wire + $mem_active + $mem_inactive + $mem_cache 
+ $mem_free);

mem_all is a somewhat rounded and rationalized value, less than hw.physmem:
$mem_all = $sysctl->{"vm.stats.vm.v_page_count"} * $sysctl->{"hw.pagesize"};

Anyway, this is not so postgresql related at the moment. The correct thing to 
do (since you run production servers on FreeBSD) is to post to the relevant 
FreeBSD list and/or forum. freebsd-questi...@freebsd.org and 
freebsd-sta...@freebsd.org would be a good start. 
Also the forums : http://forums.freebsd.org/forumdisplay.php?f=3
Only after gathering substantial info from there would it make sense to come
back here and maybe ask more questions.
And since we are observing different gap percentages (mine is 5%, yours is
26%), I think you should take it up with the FreeBSD camp.

Please drop the link to the relevant thread there, if you decide to do so.

I would like to follow this.

Thanx!

> 
> Frank
> 
> Am 2012-11-07 09:26, schrieb Achilleas Mantzios:
> > Vick,
> > fantastic script, thanx! FreeBSD sysctl system is awesome!
> >
> > On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:
> >
> >
> >
> > On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:
> >
> > and this is after a few hours of running:
> >
> > Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
> > Swap: 4096M Total, 828K Used, 4095M Free
> >
> >
> >
> >
> > For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
> > Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The 
> > data + indexes are about 240GB on disk.  This server only runs postgres 
> > aside from the basic system processes.
> >
> >
> > Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
> > Swap: 4096M Total, 272K Used, 4096M Free
> >
> >
> > I agree with the conclusion that the shared memory segments are confusing 
> > the output of top.  There are no memory leaks, and FreeBSD doesn't "lose" 
> > any memory.
> >
> >
> > There are some scripts floating around that read values from sysctl 
> > vm.stats.vm and format them nicely to tell you how much memory is used up 
> > and free.  Try the one referenced here: 
> > http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/
> >
> >
> >
> >
> >
> > -
> > Achilleas Mantzios
> > IT DEPT
> >
> 
> 
> 
-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Frank Broniewski
Hey, this is really cool. I directly tried the script and there's a line 
from the output that caught my eye:


> mem_gap_vm:  +   8812892160 (   8404MB) [ 26%] Memory gap: UNKNOWN

Is this the shared buffers? I guess so, but I want to confirm my guess ...

Frank

Am 2012-11-07 09:26, schrieb Achilleas Mantzios:

Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the output of 
top.  There are no memory leaks, and FreeBSD doesn't "lose" any memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT




--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-07 Thread Achilleas Mantzios
Vick,
fantastic script, thanx! FreeBSD sysctl system is awesome!

On Τρι 06 Νοε 2012 14:33:43 Vick Khera wrote:



On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:

and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free




For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with 
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The data 
+ indexes are about 240GB on disk.  This server only runs postgres aside from 
the basic system processes.


Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free


I agree with the conclusion that the shared memory segments are confusing the 
output of top.  There are no memory leaks, and FreeBSD doesn't "lose" any 
memory.


There are some scripts floating around that read values from sysctl vm.stats.vm 
and format them nicely to tell you how much memory is used up and free.  Try 
the one referenced here: 
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/





-
Achilleas Mantzios
IT DEPT

Re: [GENERAL] Memory issue on FreeBSD

2012-11-06 Thread Vick Khera
On Mon, Nov 5, 2012 at 10:11 AM, Frank Broniewski  wrote:

> and this is after a few hours of running:
>
> Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
> Swap: 4096M Total, 828K Used, 4095M Free
>
>
For comparison, here is the output of a 32GB FreeBSD 9.0/amd64 server, with
Postgres 9.0.7 running since June 10, and is heavily pounded on 24x7.  The
data + indexes are about 240GB on disk.  This server only runs postgres
aside from the basic system processes.

Mem: 231M Active, 21G Inact, 3777M Wired, 1009M Cache, 3285M Buf, 191M Free
Swap: 4096M Total, 272K Used, 4096M Free

I agree with the conclusion that the shared memory segments are confusing
the output of top.  There are no memory leaks, and FreeBSD doesn't "lose"
any memory.

There are some scripts floating around that read values from sysctl
vm.stats.vm and format them nicely to tell you how much memory is used up
and free.  Try the one referenced here:
http://www.cyberciti.biz/faq/freebsd-command-to-get-ram-information/


Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
Since the top reporting goes back to normal when postgresql is stopped,
and since postgresql is special due to its use of IPC, I would be inclined
to think that the culprit here is the shared memory.

I don't know where maintenance_work_mem really lives (process normal address 
space or IPC shared mem)
and if that makes any difference. If it is possible you might play with those 
two values and see if anything changes.

Currently i have :

maintenance_work_mem = 960MB # pgtune wizard 2012-11-01
shared_buffers = 3840MB # pgtune wizard 2012-11-01

top:
last pid: 74896;  load averages:  0.02,  0.08,  0.08
  up 4+06:20:31  18:14:19
187 processes: 1 running, 172 sleeping, 14 zombie
CPU: % user, % nice, % system, % interrupt, % idle
Mem: 4064M Active, 8111M Inact, 2014M Wired, 322M Cache, 1645M Buf, 1106M Free
Swap: 8000M Total, 608K Used, 7999M Free

hw.physmem: 17144205312
hw.usermem: 15028662272
hw.realmem: 17985175552

top (excluding Buf) amounts to 15617 Megs while physmem shows as 16349 Megs

but as i said i run 8.3 on AMD64 and pgsql 9.2.1

On Δευ 05 Νοε 2012 16:11:39 Frank Broniewski wrote:
> Hi,
> 
> I just add the different memory values together (minus the buffers). 
> Usually this sums up (+/-) to the installed memory size, at least on my 
> other machines. I found a thread similar to my problem here [1], but no 
> solution. I don't mind top showing false values, but if there's a larger 
> problem behind this, then I really want to solve it.
> 
> Top is really just an indicator for this issue, it's also visible in my 
> munin stats [2]
> 
> Below is a output _without_ postgresql running:
> Mem: 59M Active, 17G Inact, 3953M Wired, 1325M Cache, 3283M Buf, 8663M Free
> Swap: 4096M Total, 828K Used, 4095M Free
> 
> 
> and this is after a few hours of running:
> 
> Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
> Swap: 4096M Total, 828K Used, 4095M Free
> 
> some memory related sysctl values:
> hw.realmem: 34879832064
> hw.physmem: 34322804736
> hw.usermem: 30161108992
> 
> # sysctl vm.vmtotal
> vm.vmtotal:
> System wide totals computed every five seconds: (values in kilobytes)
> ===
> Processes:(RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 70)
> Virtual Memory:   (Total: 1084659688K Active: 10400940K)
> Real Memory:  (Total: 1616176K Active: 1349052K)
> Shared Virtual Memory:(Total: 60840K Active: 14132K)
> Shared Real Memory:   (Total: 11644K Active: 8388K)
> Free Memory Pages:7263972K
> 
> 
> [1] 
> http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/061247.html
> [2] 
> http://www.gis-hosting.lu/monitor/munin/metrico/bilbo.metrico/memory.html
> 
> 
> Am 2012-11-05 15:21, schrieb Achilleas Mantzios:
> > How do you measure that smth is missing from top? What values do you add?
> > I am currently running 8.3 but we shouldn't be so far apart top-wise.
> > What is the reading under SIZE and RES in top for all postgresql processes?
> > Take note that shared mem should be recorded for each and every postmaster 
> > running.
> >
> > On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:
> >> Hi,
> >>
> >> thank you for your feedback. I had a look at those commands and their
> >> output, especially in conjunction with the SEGSZ value from icps  -am
> >>
> >> Here's an example output:
> >> # ipcs -am
> >> Shared Memory:
> >> T   ID  KEY MODEOWNERGROUPCREATOR
> >> CGROUP NATTCHSEGSZ CPID LPID ATIME
> >> DTIMECTIME
> >> m   262144  5432001 --rw--- pgsqlpgsqlpgsqlpgsql
> >>12   88139939844551245512 13:49:28
> >> 14:31:29 13:49:28
> >>
> >> but frankly this tells me nothing. I can tell that the value SEGSZ is
> >> right from the start 8813993984 and it doesn't change anymore. The only
> >> value that changes is the NATTCH value, I observed a range from 8 to 36
> >> there. I agree that the SEGSZ value matches the 8GB shared buffer, but
> >> how can I make the connection of my 5GB missing in top? I wonder if this
> >> might be the maintenance_work_mem, which is set to 4GB?
> >>
> >> Many thanks,
> >>
> >> Frank
> >>
> >> Am 2012-11-05 12:14, schrieb Achilleas Mantzios:
> >>>
> >>> ipcs in FreeBSD is a little ... tricky.
> >>>
> >>> ipcs -M
> >>> ipcs -m
> >>> ipcs -am
> >>>
> >>> could be your friends
> >>>
> >>> On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
>  Hi,
> 
>  I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
>  Usually I use top to examine the memory usage of the system. After a
>  while, a part, approximately 5GB, vanish from top, so that the memory
>  rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
>  available, but then it's already slightly decreasing. It's a standalone
>  database server. It has an Ope

Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

I just add the different memory values together (minus the buffers). 
Usually this sums up (+/-) to the installed memory size, at least on my 
other machines. I found a thread similar to my problem here [1], but no 
solution. I don't mind top showing false values, but if there's a larger 
problem behind this, then I really want to solve it.


Top is really just an indicator for this issue, it's also visible in my 
munin stats [2]


Below is the output _without_ postgresql running:
Mem: 59M Active, 17G Inact, 3953M Wired, 1325M Cache, 3283M Buf, 8663M Free
Swap: 4096M Total, 828K Used, 4095M Free


and this is after a few hours of running:

Mem: 91M Active, 17G Inact, 3983M Wired, 1526M Cache, 3283M Buf, 155M Free
Swap: 4096M Total, 828K Used, 4095M Free

some memory related sysctl values:
hw.realmem: 34879832064
hw.physmem: 34322804736
hw.usermem: 30161108992

# sysctl vm.vmtotal
vm.vmtotal:
System wide totals computed every five seconds: (values in kilobytes)
===
Processes:  (RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 70)
Virtual Memory: (Total: 1084659688K Active: 10400940K)
Real Memory:(Total: 1616176K Active: 1349052K)
Shared Virtual Memory:  (Total: 60840K Active: 14132K)
Shared Real Memory: (Total: 11644K Active: 8388K)
Free Memory Pages:  7263972K


[1] 
http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/061247.html
[2] 
http://www.gis-hosting.lu/monitor/munin/metrico/bilbo.metrico/memory.html



Am 2012-11-05 15:21, schrieb Achilleas Mantzios:

How do you measure that smth is missing from top? What values do you add?
I am currently running 8.3 but we shouldn't be so far apart top-wise.
What is the reading under SIZE and RES in top for all postgresql processes?
Take note that shared mem should be recorded for each and every postmaster 
running.

On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:

Hi,

thank you for your feedback. I had a look at those commands and their
output, especially in conjunction with the SEGSZ value from icps  -am

Here's an example output:
# ipcs -am
Shared Memory:
T   ID  KEY MODEOWNERGROUPCREATOR
CGROUP NATTCHSEGSZ CPID LPID ATIME
DTIMECTIME
m   262144  5432001 --rw--- pgsqlpgsqlpgsqlpgsql
   12   88139939844551245512 13:49:28
14:31:29 13:49:28

but frankly this tells me nothing. I can tell that the value SEGSZ is
right from the start 8813993984 and it doesn't change anymore. The only
value that changes is the NATTCH value, I observed a range from 8 to 36
there. I agree that the SEGSZ value matches the 8GB shared buffer, but
how can I make the connection of my 5GB missing in top? I wonder if this
might be the maintenance_work_mem, which is set to 4GB?

Many thanks,

Frank

Am 2012-11-05 12:14, schrieb Achilleas Mantzios:


ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends

On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
Usually I use top to examine the memory usage of the system. After a
while, a part, approximately 5GB, vanish from top, so that the memory
rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
available, but then it's already slightly decreasing. It's a standalone
database server. It has an OpenStreetMap world database running with
353GB data (with indices).

Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off


So any help finding out why my system "looses" some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.

Frank





-
Achilleas Mantzios
IT DEPT







-
Achilleas Mantzios
IT DEPT





--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
How do you measure that something is missing from top? What values do you add?
I am currently running 8.3 but we shouldn't be so far apart top-wise.
What is the reading under SIZE and RES in top for all postgresql processes?
Take note that shared mem should be recorded for each and every postmaster 
running.

On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:
> Hi,
> 
> thank you for your feedback. I had a look at those commands and their 
> output, especially in conjunction with the SEGSZ value from icps  -am
> 
> Here's an example output:
> # ipcs -am
> Shared Memory:
> T   ID  KEY MODEOWNERGROUPCREATOR 
> CGROUP NATTCHSEGSZ CPID LPID ATIME 
> DTIMECTIME
> m   262144  5432001 --rw--- pgsqlpgsqlpgsqlpgsql 
>   12   88139939844551245512 13:49:28 
> 14:31:29 13:49:28
> 
> but frankly this tells me nothing. I can tell that the value SEGSZ is 
> right from the start 8813993984 and it doesn't change anymore. The only 
> value that changes is the NATTCH value, I observed a range from 8 to 36 
> there. I agree that the SEGSZ value matches the 8GB shared buffer, but 
> how can I make the connection of my 5GB missing in top? I wonder if this 
> might be the maintenance_work_mem, which is set to 4GB?
> 
> Many thanks,
> 
> Frank
> 
> Am 2012-11-05 12:14, schrieb Achilleas Mantzios:
> >
> > ipcs in FreeBSD is a little ... tricky.
> >
> > ipcs -M
> > ipcs -m
> > ipcs -am
> >
> > could be your friends
> >
> > On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
> >> Hi,
> >>
> >> I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
> >> Usually I use top to examine the memory usage of the system. After a
> >> while, a part, approximately 5GB, vanish from top, so that the memory
> >> rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
> >> available, but then it's already slightly decreasing. It's a standalone
> >> database server. It has an OpenStreetMap world database running with
> >> 353GB data (with indices).
> >>
> >> Some system information:
> >> # uname -r
> >> 9.0-RELEASE-p3
> >> # pg_ctl --version
> >> pg_ctl (PostgreSQL) 9.1.6
> >>
> >> # cat /boot/loader.conf
> >> ...
> >> kern.ipc.semmni=256
> >> kern.ipc.semmns=512
> >> kern.ipc.semmnu=256
> >> kern.ipc.semumr=200
> >> vm.pmap.shpgperproc=400
> >> vm.pmap.pv_entry_max=50331648
> >> ...
> >>
> >> # cat /pgdata/data/postgresql.conf
> >> ...
> >> default_statistics_target = 50 # pgtune wizard 2012-04-04
> >> maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
> >> constraint_exclusion = on # pgtune wizard 2012-04-04
> >> checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
> >> effective_cache_size = 24GB # pgtune wizard 2012-04-04
> >> work_mem = 768MB # pgtune wizard 2012-04-04
> >> wal_buffers = 16MB # pgtune wizard 2012-04-04
> >> checkpoint_segments = 60 # 20
> >> shared_buffers = 8GB # pgtune wizard 2012-04-04
> >> max_connections = 100
> >> synchronous_commit = off
> >>
> >>
> >> So any help finding out why my system "looses" some RAM is greatly
> >> appreciated :-) If more information is needed I will gladly provide it.
> >>
> >> Frank
> >>
> >>
> >>
> >>
> > -
> > Achilleas Mantzios
> > IT DEPT
> >
> >
> 
> 
> 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

thank you for your feedback. I had a look at those commands and their
output, especially in conjunction with the SEGSZ value from ipcs -am.


Here's an example output:
# ipcs -am
Shared Memory:
T     ID      KEY      MODE     OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ       CPID   LPID   ATIME     DTIME     CTIME
m     262144  5432001  --rw---  pgsql  pgsql  pgsql    pgsql   12      8813993984  45512  45512  13:49:28  14:31:29  13:49:28


but frankly this tells me nothing. I can tell that the SEGSZ value is
8813993984 right from the start and it doesn't change anymore. The only
value that changes is NATTCH; I observed a range from 8 to 36
there. I agree that the SEGSZ value matches the 8GB of shared buffers, but
how do I make the connection to the 5GB missing in top? I wonder if this
might be the maintenance_work_mem, which is set to 4GB?
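
For what it is worth, the relevant memory settings can also be read from the
server side with a generic query (nothing specific to this setup):

select name, setting, unit
from pg_settings
where name in ('shared_buffers', 'work_mem', 'maintenance_work_mem');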


Many thanks,

Frank

Am 2012-11-05 12:14, schrieb Achilleas Mantzios:


ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends

On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
Usually I use top to examine the memory usage of the system. After a
while, a part, approximately 5GB, vanish from top, so that the memory
rounds up to 27GB.  After restarting PostgreSQL, I have all 32GB again
available, but then it's already slightly decreasing. It's a standalone
database server. It has an OpenStreetMap world database running with
353GB data (with indices).

Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off


So any help finding out why my system "loses" some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.

Frank





-
Achilleas Mantzios
IT DEPT





--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios
(Scrap my previous internal (hence fake) email address; this one is correct. 
Sorry for that.)
You can stop pgsql, start it again and then watch for the jump in the SEGSZ value.
I am pretty confident that this value is the shared_buffers size in bytes.
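
As a quick sanity check (my arithmetic, using the segment Frank posted): 8GB of 
shared_buffers is 8589934592 bytes and SEGSZ is 8813993984 bytes, so the extra 
~214MB would be the rest of the shared area (buffer headers, lock tables and so on):

# difference between the reported segment and 8GB of shared_buffers
echo $(( 8813993984 - 8 * 1024 * 1024 * 1024 ))   # 224059392 bytes, roughly 214 MiB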


On Mon 05 Nov 2012 13:14:37 Achilleas Mantzios wrote:
> 
> ipcs in FreeBSD is a little ... tricky.
> 
> ipcs -M
> ipcs -m
> ipcs -am
> 
> could be your friends
> 
> On Mon 05 Nov 2012 11:22:46 Frank Broniewski wrote:
> > Hi,
> > 
> > I am running a PostgreSQL server on FreeBSD. The system has 32GB of memory. 
> > Usually I use top to examine the memory usage of the system. After a 
> > while a part of it, approximately 5GB, vanishes from top, so that the memory 
> > only adds up to about 27GB. After restarting PostgreSQL I have all 32GB 
> > available again, but it immediately begins to decrease slowly again. It's a standalone 
> > database server. It has an OpenStreetMap world database running with 
> > 353GB of data (with indices).
> > 
> > Some system information:
> > # uname -r
> > 9.0-RELEASE-p3
> > # pg_ctl --version
> > pg_ctl (PostgreSQL) 9.1.6
> > 
> > # cat /boot/loader.conf
> > ...
> > kern.ipc.semmni=256
> > kern.ipc.semmns=512
> > kern.ipc.semmnu=256
> > kern.ipc.semumr=200
> > vm.pmap.shpgperproc=400
> > vm.pmap.pv_entry_max=50331648
> > ...
> > 
> > # cat /pgdata/data/postgresql.conf
> > ...
> > default_statistics_target = 50 # pgtune wizard 2012-04-04
> > maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
> > constraint_exclusion = on # pgtune wizard 2012-04-04
> > checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
> > effective_cache_size = 24GB # pgtune wizard 2012-04-04
> > work_mem = 768MB # pgtune wizard 2012-04-04
> > wal_buffers = 16MB # pgtune wizard 2012-04-04
> > checkpoint_segments = 60 # 20
> > shared_buffers = 8GB # pgtune wizard 2012-04-04
> > max_connections = 100
> > synchronous_commit = off
> > 
> > 
> > So any help finding out why my system "loses" some RAM is greatly 
> > appreciated :-) If more information is needed I will gladly provide it.
> > 
> > Frank
> > 
> > 
> > 
> > 
> -
> Achilleas Mantzios
> IT DEPT
> 
> 
> 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Achilleas Mantzios

ipcs in FreeBSD is a little ... tricky.

ipcs -M
ipcs -m
ipcs -am

could be your friends
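
If it is more convenient, the limits that ipcs -M reports can also be read straight 
from sysctl (a sketch, assuming the stock FreeBSD names):

sysctl kern.ipc.shmmax kern.ipc.shmall kern.ipc.shmmni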

On Mon 05 Nov 2012 11:22:46 Frank Broniewski wrote:
> Hi,
> 
> I am running a PostgreSQL server on FreeBSD. The system has 32GB of memory. 
> Usually I use top to examine the memory usage of the system. After a 
> while a part of it, approximately 5GB, vanishes from top, so that the memory 
> only adds up to about 27GB. After restarting PostgreSQL I have all 32GB 
> available again, but it immediately begins to decrease slowly again. It's a standalone 
> database server. It has an OpenStreetMap world database running with 
> 353GB of data (with indices).
> 
> Some system information:
> # uname -r
> 9.0-RELEASE-p3
> # pg_ctl --version
> pg_ctl (PostgreSQL) 9.1.6
> 
> # cat /boot/loader.conf
> ...
> kern.ipc.semmni=256
> kern.ipc.semmns=512
> kern.ipc.semmnu=256
> kern.ipc.semumr=200
> vm.pmap.shpgperproc=400
> vm.pmap.pv_entry_max=50331648
> ...
> 
> # cat /pgdata/data/postgresql.conf
> ...
> default_statistics_target = 50 # pgtune wizard 2012-04-04
> maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
> constraint_exclusion = on # pgtune wizard 2012-04-04
> checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
> effective_cache_size = 24GB # pgtune wizard 2012-04-04
> work_mem = 768MB # pgtune wizard 2012-04-04
> wal_buffers = 16MB # pgtune wizard 2012-04-04
> checkpoint_segments = 60 # 20
> shared_buffers = 8GB # pgtune wizard 2012-04-04
> max_connections = 100
> synchronous_commit = off
> 
> 
> So any help finding out why my system "loses" some RAM is greatly 
> appreciated :-) If more information is needed I will gladly provide it.
> 
> Frank
> 
> 
> 
> 
-
Achilleas Mantzios
IT DEPT




[GENERAL] Memory issue on FreeBSD

2012-11-05 Thread Frank Broniewski

Hi,

I am running a PostgreSQL server on FreeBSD. The system has 32GB of memory. 
Usually I use top to examine the memory usage of the system. After a 
while a part of it, approximately 5GB, vanishes from top, so that the memory 
only adds up to about 27GB. After restarting PostgreSQL I have all 32GB 
available again, but it immediately begins to decrease slowly again. It's a standalone 
database server. It has an OpenStreetMap world database running with 
353GB of data (with indices).


Some system information:
# uname -r
9.0-RELEASE-p3
# pg_ctl --version
pg_ctl (PostgreSQL) 9.1.6

# cat /boot/loader.conf
...
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semumr=200
vm.pmap.shpgperproc=400
vm.pmap.pv_entry_max=50331648
...

# cat /pgdata/data/postgresql.conf
...
default_statistics_target = 50 # pgtune wizard 2012-04-04
maintenance_work_mem = 4GB # pgtune wizard 2012-04-04
constraint_exclusion = on # pgtune wizard 2012-04-04
checkpoint_completion_target = 0.9 # pgtune wizard 2012-04-04
effective_cache_size = 24GB # pgtune wizard 2012-04-04
work_mem = 768MB # pgtune wizard 2012-04-04
wal_buffers = 16MB # pgtune wizard 2012-04-04
checkpoint_segments = 60 # 20
shared_buffers = 8GB # pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off
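
(A side note on the settings above, not specific to the missing 5GB: work_mem is 
allocated per sort or hash operation rather than per server, so with 100 connections 
the theoretical worst case is far beyond physical RAM. A rough sketch of that 
arithmetic:)

# if every connection ran a single 768MB sort at the same time
echo $(( 100 * 768 )) MB    # 76800 MB, about 75GB on a 32GB machine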


So any help finding out why my system "loses" some RAM is greatly 
appreciated :-) If more information is needed I will gladly provide it.


Frank



--
Frank BRONIEWSKI

METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN

tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu




Re: [GENERAL] Memory Issue

2011-11-04 Thread Ioana Danes
Hi Scott,

I followed your advice and ran the test with the changes suggested in 
points 1, 2 and 3 below, and my performance test ran for 18 hours without
swapping. I did see a 40% drop in performance, but I think that is a different 
problem. I will run more tests and post the results if anyone is interested.

BTW, I read PostgreSQL 9.0 High Performance by Greg Smith yesterday, 
and he does cover these parameters. I should have done that earlier; it 
is a great book.

Thank you for your help,
Ioana




- Original Message -
From: Scott Marlowe 
To: Ioana Danes 
Cc: PostgreSQL General 
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue

On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes  wrote:

> After another half an hour almost the entire swap is used and the system 
> performs really badly, at 100 TPS or lower.
> It never runs out of memory though!
>
> I would like to ask for your opinion on this issue.
> My concerns are why the memory is not reused earlier and why the system is 
> swapping when it does only these two inserts.
> Is this an OS issue, postgres issue, configuration issue?
> Your advice is greatly appreciated.

You can try a few things.

1: lower your shared_buffers.  It's unlikely you really need 4G for
them.  A few hundred megs is probably plenty for the type of work
you're doing.  Let the kernel cache the data you're not hitting right
this second.

2: Set swappiness to 0.  I.e. edit /etc/sysctl.conf and add a line
like vm.swappiness = 0 then run sudo sysctl -p

3: Turn off overcommit.  Same as number 2, set vm.overcommit_memory =
2 which will turn off the ability of linux to overcommit memory and
should then turn off the OOM killer.

4: just turn off swap. With only 16Gigs this is a tad dangerous,
especially if you haven't turned off the OOM in step 3.  Memory is
cheap, throw 32G at least into the machine.  With 1200 users, you
really need plenty of memory.  To turn off swap add something like
/sbin/swapoff -a to the /etc/rc.local file (before the exit line
natch)

Re: [GENERAL] Memory Issue

2011-11-03 Thread Ioana Danes


- Original Message -
From: Scott Marlowe 
To: Ioana Danes 
Cc: PostgreSQL General 
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue

On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes  wrote:

> After another half an hour almost the entire swap is used and the system 
> performs really badly, at 100 TPS or lower.
> It never runs out of memory though!
>
> I would like to ask for your opinion on this issue.
> My concerns are why the memory is not reused earlier and why the system is 
> swapping when it does only these two inserts.
> Is this an OS issue, postgres issue, configuration issue?
> Your advice is greatly appreciated.

You can try a few things.

1: lower your shared_buffers.  It's unlikely you really need 4G for
them.  A few hundred megs is probably plenty for the type of work
you're doing.  Let the kernel cache the data you're not hitting right
this second.

2: Set swappiness to 0.    I.e. edit /etc/sysctl.conf and add a line
like vm.swappiness = 0 then run sudo sysctl -p

3: Turn off overcommit.  Same as number 2, set vm.overcommit_memory =
2 which will turn off the ability of linux to overcommit memory and
should then turn off the OOM killer.

4: just turn off swap. With only 16Gigs this is a tad dangerous,
especially if you haven't turned off the OOM in step 3.  Memory is
cheap, throw 32G at least into the machine.  With 1200 users, you
really need plenty of memory.  To turn off swap add something like
/sbin/swapoff -a to the /etc/rc.local file (before the exit line
natch)


Thank you for your quick reply, Scott. I will try right now and let you know...
Ioana




Re: [GENERAL] Memory Issue

2011-11-03 Thread Scott Marlowe
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes  wrote:
> Hello Everyone,
>
> I have a performance test running with 1200 clients performing this 
> transaction every second:
>
>
> begin transaction
> select nextval('sequence1');
> select nextval('sequence2');
> insert into table1;
> insert into table2;
> commit;
>
> Table1 and table2 have no foreign keys and no triggers. There are 13 indexes 
> on table1 and 5 indexes on table2.
>
> We do use connection pooling but because the clients commit this transaction 
> every second I basically have 1200 connections all the time.
>
> The db server is dedicated running on
>
> SUSE Linux Enterprise Server 11 (x86_64)
> VERSION = 11
> PATCHLEVEL = 1
>
> Postgres 9.0.3 (Same behaviour on Postgres 9.1.1):
>
>
> The server has 16GB of RAM and the postgres parameters are:
>
> shared_buffers = 4GB
> work_mem = 1MB
> maintenance_work_mem = 2GB
> effective_cache_size = 8GB
>
> wal_level = minimal
> wal_buffers = 1MB
>
> checkpoint_segments =16
> checkpoint_warning = 30s
> archive_mode = off
>
> autovacuum = off
>
> kernel.shmmax=5368709120 (5GB)
> kernel.shmall=5368709120 (5GB)
>
> The test performs well for about an hour at 1150 TPS and then the TPS goes 
> down really badly and the clients time out...
> I watched the memory usage and the slowness is caused by swapping:
>
> vmstat
>
> procs ---memory-- ---swap-- -io -system-- -cpu--
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs    us sy id wa st
> 33  1 1149892  77484    436 3289824 1012 2700  1016  5844 12285 35318 43 32  1  0 25
> 28  1 1150348  73996    432 3291632  908 1300   908  4684 13668 29100 43 31  1  0 24
>  8  1 1151704  75212    440 3292756 1048 2260  1056 10300 13844 18628 39 33  6  1 22
>  8  1 1152716  75364    428 3294028 1640 1932  1640  6780 15325 17785 38 34  5  1 22
> 11  1 1154024  94356    444 3278828 1260 2328  1276  6752 15171 15538 40 30  7  1 22
>  1  0 1154876  98156    480 3281456 1572 1844  1596  8260 14690 14451 32 32 13  2 19
>  0  0 1154892 100588    492 3281636   56  108    68   932  2744  2082  2  8 88  1  1
>
>
> free
> 
>              total       used       free     shared    buffers     cached
> Mem:      16790144   16710092      80052          0       1724    3337172
> -/+ buffers/cache:   13371196    3418948
> Swap:      4194296    1162980    3031316
>
>
> top:
> 
> top - 12:12:00 up  1:54,  6 users,  load average: 37.57, 41.52, 31.24
> Tasks: 1309 total,  42 running, 1267 sleeping,   0 stopped,   0 zombie
> Cpu(s): 29.4%us, 13.8%sy,  0.0%ni, 23.8%id, 12.8%wa,  0.0%hi,  5.3%si, 14.8%st
> Mem: 16396M total,    16310M used,   85M free,    2M buffers
> Swap: 4095M total, 1187M used, 2908M free, 3213M cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>    39 root      20   0     0    0    0 S    4  0.0   3:42.18 kswapd0
>  2282 postgres  20   0 57628  608  332 R    3  0.0   2:18.42 postmaster
>  9722 postgres  20   0 4308m 220m 215m S    2  1.3   0:03.01 postmaster
> 10483 postgres  20   0 4303m 164m 160m S    2  1.0   0:02.02 postmaster
> 10520 postgres  20   0 4303m 158m 155m R    2  1.0   0:02.24 postmaster
>  9005 postgres  20   0 4308m 298m 294m S    2  1.8   0:04.52 postmaster
>
>
> After another half an hour almost the entire swap is used and the system 
> performs really badly, at 100 TPS or lower.
> It never runs out of memory though!
>
> I would like to ask for your opinion on this issue.
> My concerns are why the memory is not reused earlier and why the system is 
> swapping when it does only these two inserts.
> Is this an OS issue, postgres issue, configuration issue?
> Your advice is greatly appreciated.

You can try a few things.

1: lower your shared_buffers.  It's unlikely you really need 4G for
them.  A few hundred megs is probably plenty for the type of work
you're doing.  Let the kernel cache the data you're not hitting right
this second.

2: Set swappiness to 0.  I.e. edit /etc/sysctl.conf and add a line
like vm.swappiness = 0 then run sudo sysctl -p (see the sketch after this list)

3: Turn off overcommit.  Same as number 2, set vm.overcommit_memory =
2 which will turn off the ability of linux to overcommit memory and
should then turn off the OOM killer.

4: just turn off swap. With only 16Gigs this is a tad dangerous,
especially if you haven't turned off the OOM in step 3.  Memory is
cheap, throw 32G at least into the machine.  With 1200 users, you
really need plenty of memory.  To turn off swap add something like
/sbin/swapoff -a to the /etc/rc.local file (before the exit line
natch)
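
Putting suggestions 2 and 3 together, a minimal /etc/sysctl.conf sketch (the
overcommit_ratio value is an assumption; tune it for your own RAM and swap):

# /etc/sysctl.conf
vm.swappiness = 0          # discourage the kernel from swapping eagerly
vm.overcommit_memory = 2   # strict accounting, so allocations fail instead of invoking the OOM killer
vm.overcommit_ratio = 80   # assumed value: percent of RAM counted on top of swap for the commit limit

# apply without rebooting
sudo sysctl -p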



[GENERAL] Memory Issue

2011-11-03 Thread Ioana Danes
Hello Everyone,

I have a performance test running with 1200 clients performing this transaction 
every second:


begin transaction
select nextval('sequence1');
select nextval('sequence2');
insert into table1;
insert into table2;
commit;
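
(For anyone who wants to reproduce the shape of this load, here is a self-contained 
sketch of one such transaction; the database name, columns and values are made up, 
and the real tables are much wider and carry the 13 and 5 indexes mentioned below:)

psql -d testdb <<'SQL'
BEGIN;
SELECT nextval('sequence1');
SELECT nextval('sequence2');
-- hypothetical columns standing in for the real rows
INSERT INTO table1 (id, created_at) VALUES (currval('sequence1'), now());
INSERT INTO table2 (id, created_at) VALUES (currval('sequence2'), now());
COMMIT;
SQL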

Table1 and table2 have no foreign keys and no triggers. There are 13 indexes on 
table1 and 5 indexes on table2.

We do use connection pooling but because the clients commit this transaction 
every second I basically have 1200 connections all the time.

The db server is dedicated running on 

SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 1

Postgres 9.0.3 (Same behaviour on Postgres 9.1.1): 


The server has 16GB of RAM and the postgres parameters are:

shared_buffers = 4GB
work_mem = 1MB
maintenance_work_mem = 2GB
effective_cache_size = 8GB

wal_level = minimal
wal_buffers = 1MB 

checkpoint_segments =16
checkpoint_warning = 30s
archive_mode = off

autovacuum = off

kernel.shmmax=5368709120 (5GB)
kernel.shmall=5368709120 (5GB)
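
(A note on the last two settings: kernel.shmmax is specified in bytes while 
kernel.shmall is specified in pages, so the same literal number does not mean 5GB 
in both places. A quick way to size them consistently, assuming the usual 
4096-byte page:)

getconf PAGE_SIZE                                         # usually 4096
echo kernel.shmmax = $(( 6 * 1024 * 1024 * 1024 ))        # bytes, comfortably above shared_buffers
echo kernel.shmall = $(( 6 * 1024 * 1024 * 1024 / 4096 )) # pages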

The test performs well for about an hour at 1150 TPS and then the TPS goes 
down really badly and the clients time out...
I watched the memory usage and the slowness is caused by swapping:

vmstat 

procs ---memory-- ---swap-- -io -system-- -cpu--
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs    us sy id wa st
33  1 1149892  77484    436 3289824 1012 2700  1016  5844 12285 35318 43 32  1  0 25
28  1 1150348  73996    432 3291632  908 1300   908  4684 13668 29100 43 31  1  0 24
 8  1 1151704  75212    440 3292756 1048 2260  1056 10300 13844 18628 39 33  6  1 22
 8  1 1152716  75364    428 3294028 1640 1932  1640  6780 15325 17785 38 34  5  1 22
11  1 1154024  94356    444 3278828 1260 2328  1276  6752 15171 15538 40 30  7  1 22
 1  0 1154876  98156    480 3281456 1572 1844  1596  8260 14690 14451 32 32 13  2 19
 0  0 1154892 100588    492 3281636   56  108    68   932  2744  2082  2  8 88  1  1


free

             total       used       free     shared    buffers     cached
Mem:      16790144   16710092      80052          0       1724    3337172
-/+ buffers/cache:   13371196    3418948
Swap:      4194296    1162980    3031316
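
Reading the free output: the -/+ buffers/cache line already folds the page cache 
back in, so about 3.3GB is still reclaimable; the worrying part is the ~1.1GB of 
swap in use together with the nonzero si/so columns in vmstat above. A quick check 
of that arithmetic (kB figures taken from the Mem: line):

echo $(( (80052 + 1724 + 3337172) / 1024 )) MB reclaimable   # ~3338 MB, matching the -/+ line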


top:

top - 12:12:00 up  1:54,  6 users,  load average: 37.57, 41.52, 31.24
Tasks: 1309 total,  42 running, 1267 sleeping,   0 stopped,   0 zombie
Cpu(s): 29.4%us, 13.8%sy,  0.0%ni, 23.8%id, 12.8%wa,  0.0%hi,  5.3%si, 14.8%st
Mem: 16396M total,    16310M used,   85M free,    2M buffers
Swap: 4095M total, 1187M used, 2908M free, 3213M cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   39 root      20   0     0    0    0 S    4  0.0   3:42.18 kswapd0
 2282 postgres  20   0 57628  608  332 R    3  0.0   2:18.42 postmaster
 9722 postgres  20   0 4308m 220m 215m S    2  1.3   0:03.01 postmaster
10483 postgres  20   0 4303m 164m 160m S    2  1.0   0:02.02 postmaster
10520 postgres  20   0 4303m 158m 155m R    2  1.0   0:02.24 postmaster
 9005 postgres  20   0 4308m 298m 294m S    2  1.8   0:04.52 postmaster
 

After another half an hour almost the entire swap is used and the system 
performs really badly, at 100 TPS or lower. 
It never runs out of memory though!

I would like to ask for your opinion on this issue.
My concerns are why the memory is not reused earlier and why the system is 
swapping when it does only these two inserts. 
Is this an OS issue, postgres issue, configuration issue? 
Your advice is greatly appreciated.

Thank you for your time and help,
Ioana
