Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-23 Thread Tomeh, Husam
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 14, 2006 3:49 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Out of Memory Error during Vacuum Analyze and
Create Index

"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 mtrac=# show maintenance_work_mem ;
  maintenance_work_mem
 ----------------------
  1048576                                        <==
 (1 row)

 mtrac=#
 mtrac=#
 mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
 ERROR:  out of memory                           <==
 DETAIL:  Failed on request of size 134217728.   <==

It would be useful to look at the detailed allocation info that this
(should have) put into the postmaster log.  Also, if you could get
a stack trace back from the error, that would be even more useful.
To do that,
* start psql
* determine PID of connected backend (use pg_backend_pid())
* in another window, as postgres user,
    gdb /path/to/postgres backend-PID
    gdb> break errfinish
    gdb> cont
* issue failing command in psql
* when breakpoint is reached,
    gdb> bt
    ... stack trace printed here ...
    gdb> q
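
For concreteness, the whole sequence might look like this (the binary
path and PID below are invented placeholders; substitute your own):

mtrac=# select pg_backend_pid();
 pg_backend_pid
----------------
          12345
(1 row)

$ gdb /usr/local/pgsql/bin/postgres 12345
gdb> break errfinish
gdb> cont
   (now issue the failing CREATE INDEX in the psql window)
gdb> bt
   ... stack trace printed here ...
gdb> q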

regards, tom lane




Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-23 Thread Tomeh, Husam
 
What's more interesting is this:

When I first connect to the database via psql and issue the create
index statement, I get the out-of-memory error, of course. If I
don't quit my current session and re-run the same DDL statement,
the index gets created successfully! However, if after my first
unsuccessful run I exit my session, re-connect, and then run the
DDL, it fails again with the same error. I have done this many
times and it appears to be a consistent pattern of behavior. Not
sure if that'll help, but I thought it was an interesting
observation to think about.
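
Schematically, the sessions described above behave like this
(condensed; not a verbatim transcript, and the second index name
is invented):

-- session 1
mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
ERROR:  out of memory
mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
CREATE INDEX
-- reconnect, session 2
mtrac=# create index mort_ht2 on mortgage(county_id,mtg_rec_dt);
ERROR:  out of memory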


 
Husam Tomeh

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tomeh,
Husam
Sent: Thursday, February 23, 2006 11:57 AM
To: Tom Lane
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Out of Memory Error during Vacuum Analyze and
Create Index

 
Thanks for looking into this, Tom. Here's the output from the PostgreSQL log:

***  PostgreSQL log:

TopMemoryContext: 32768 total in 4 blocks; 7232 free (9 chunks); 25536 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
TopTransactionContext: 8192 total in 1 blocks; 6816 free (0 chunks); 1376 used
MessageContext: 8192 total in 1 blocks; 7104 free (1 chunks); 1088 used
smgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1077575324 total in 115158 blocks; 1860896 free (115146 chunks); 1075714428 used
ExecutorState: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 8192 total in 1 blocks; 3896 free (0 chunks); 4296 used
CacheMemoryContext: 516096 total in 6 blocks; 198480 free (2 chunks); 317616 used
mort_ht: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_depend_depender_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_depend_reference_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_conversion_default_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
MdSmgr: 8192 total in 1 blocks

Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-23 Thread Tom Lane
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 When I first connect to the database via psql and issue the create
 index statement, I get the out-of-memory error, of course. If I
 don't quit my current session and re-run the same DDL statement,
 the index gets created successfully! However, if after my first
 unsuccessful run I exit my session, re-connect, and then run the
 DDL, it fails again with the same error. I have done this many
 times and it appears to be a consistent pattern of behavior.

Now that you know how to reproduce it, please have another go at getting
that stack trace.  The palloc printout certainly looks like some kind of
memory-leak issue, but I can't tell more than that from it.

regards, tom lane



[PERFORM] Out of Memory Error during Vacuum Analyze

2006-02-14 Thread Tomeh, Husam
 
This is the second time I've gotten an out-of-memory error when I start
a database-wide vacuum or try to vacuum any table. Note this machine is
used for batch data-loading purposes.

=# vacuum analyze code;
ERROR:  out of memory
DETAIL:  Failed on request of size 1073741820.

I'm running Postgres 8.1.1 on Red Hat Linux (2.6 kernel) on an HP
server. My maintenance work area has never been changed; it's set to
1GB (maintenance_work_mem = 1048576). Physical memory: 32 GB.

Bouncing the database does not help. 

Two workarounds I have used so far:

  1) Decreasing maintenance_work_mem to 512MB; vacuum analyze then
works just fine (see the example after this list).

Or 

  2) Bouncing the server (maintaining the original 1GB
maintenance_work_mem) would also work.
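
For workaround 1, the setting can also be lowered for just the one
session instead of in postgresql.conf; in 8.1 the value is an integer
number of kilobytes:

=# set maintenance_work_mem = 524288;  -- 512MB
=# vacuum analyze code;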

I have not had this error on the production instances (which are
identical copies of the loading instance), only on the loading
instance.

Any explanation as to why, and how to avoid it? Thanks.



 
Husam  




Re: [PERFORM] Out of Memory Error during Vacuum Analyze

2006-02-14 Thread Tom Lane
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 =# vacuum analyze code;
 ERROR:  out of memory
 DETAIL:  Failed on request of size 1073741820.

That looks a whole lot like a corrupt-data issue.  The apparent
dependency on maintenance_work_mem is probably illusory --- I suspect
some of your trials are selecting the corrupted row to use in the
ANALYZE stats, and others are randomly selecting other rows.

If you are able to pg_dump the table in question then this theory is
wrong, but I'd suggest trying that first.
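
A quick way to run that test without writing out a huge dump file (the
database name here is assumed from the prompts elsewhere in the
thread):

$ pg_dump -t code mtrac > /dev/null && echo "table reads OK"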

regards, tom lane



Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tom Lane
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 I have run pg_dump and had no errors. I also got this error when
 creating one index but not another. When I lowered my
 maintenance_work_mem, the create index succeeded. 

Create index too?  Hm.  That begins to sound more like a memory leak.
Do you have any custom data types or anything like that in this
table?  Can you put together a self-contained test case using dummy
data?
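
A skeleton for such a test case might look like the following, assuming
the two indexed columns are enough to trigger the problem (types match
the table definition quoted later in the thread; the row count is an
arbitrary stand-in):

create table mort_test (county_id numeric(5,0), mtg_rec_dt numeric(8,0));
insert into mort_test
    select (random()*99999)::int, 20060000 + (random()*365)::int
    from generate_series(1, 10000000);
create index mort_test_idx on mort_test (county_id, mtg_rec_dt);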

regards, tom lane



Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tomeh, Husam

No special data types. The table is a pretty large one, over 15GB; the
index is about 1.5GB. Here's the table structure:

     Column      |         Type          | Modifiers
-----------------+-----------------------+-----------
 county_id       | numeric(5,0)          | not null
 batch_dt        | numeric(8,0)          | not null
 batch_seq       | numeric(5,0)          | not null
 mtg_seq_nbr     | numeric(1,0)          | not null
 mtg_rec_dt      | numeric(8,0)          |
 mtg_doc_nbr     | character varying(12) |
 mtg_rec_bk      | character varying(6)  |
 mtg_rec_pg      | character varying(6)  |
 mtg_amt         | numeric(11,0)         |
 lndr_cd         | character varying(10) |
 lndr_nm         | character varying(30) |
 mtg_assm_ind    | character(1)          |
 mtg_typ         | character varying(5)  |
 adj_rate_ind    | character(1)          |
 mtg_term_nbr    | numeric(5,0)          |
 mtg_term_cd     | character varying(4)  |
 mtg_due_dt      | numeric(8,0)          |
 mtg_deed_typ    | character varying(6)  |
 reverse_mtg_ind | character(1)          |
 refi_ind        | character(1)          |
 conform_ind     | character(1)          |
 cnstr_ln_ind    | character(1)          |
 title_co_cd     | character varying(5)  |
 state_id        | numeric(5,0)          |
 msa             | numeric(4,0)          |
Indexes:
    "uq_mortgage" UNIQUE, btree (county_id, batch_dt, batch_seq,
mtg_seq_nbr)
    "mortgage_idxc_county_id_mtg_rec_dt" btree (county_id, mtg_rec_dt)
    "mortgage_idxc_state_id_mtg_rec_dt" btree (state_id, mtg_rec_dt)
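
For reference, sizes like the 15GB/1.5GB figures above can be
double-checked with the size functions that are new in 8.1:

mtrac=# select pg_size_pretty(pg_relation_size('mortgage'));
mtrac=# select pg_size_pretty(pg_relation_size('uq_mortgage'));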

-

 Here's the test I did with maintenance_work_mem = 1GB:


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 1048576                                        <==
(1 row)

mtrac=#
mtrac=#
mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
ERROR:  out of memory                           <==
DETAIL:  Failed on request of size 134217728.   <==

 Then I changed the parameter to 512 MB:  


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 524288                                         <==
(1 row)

mtrac=#  create index mort_ht_512 on mortgage(county_id,mtg_rec_dt);
CREATE INDEX
---



Regards,

 
Husam  

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 14, 2006 2:16 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Out of Memory Error during Vacuum Analyze and
Create Index

"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 I have run pg_dump and had no errors. I also got this error when
 creating one index but not another. When I lowered my
 maintenance_work_mem, the create index succeeded. 

Create index too?  Hm.  That begins to sound more like a memory leak.
Do you have any custom data types or anything like that in this
table?  Can you put together a self-contained test case using dummy
data?

regards, tom lane




Re: [PERFORM] Out of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tom Lane
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
 mtrac=# show maintenance_work_mem ;
  maintenance_work_mem
 ----------------------
  1048576                                        <==
 (1 row)

 mtrac=#
 mtrac=#
 mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
 ERROR:  out of memory                           <==
 DETAIL:  Failed on request of size 134217728.   <==

It would be useful to look at the detailed allocation info that this
(should have) put into the postmaster log.  Also, if you could get
a stack trace back from the error, that would be even more useful.
To do that,
* start psql
* determine PID of connected backend (use pg_backend_pid())
* in another window, as postgres user,
    gdb /path/to/postgres backend-PID
    gdb> break errfinish
    gdb> cont
* issue failing command in psql
* when breakpoint is reached,
    gdb> bt
    ... stack trace printed here ...
    gdb> q

regards, tom lane
