Re: [PERFORM] [ADMIN] Database size growing over time and leads to performance impact

2010-03-30 Thread Tomeh, Husam
You may want to consider performing vacuums more frequently than once a
week, or really consider leveraging autovacuum if it makes sense for your
transaction volume.
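
For reference, a minimal postgresql.conf sketch for enabling autovacuum on
an 8.x-era server; the values below are illustrative assumptions, not
recommendations, and should be tuned to your transaction volume:

```
# postgresql.conf -- illustrative autovacuum settings (8.2-era parameter names)
autovacuum = on
autovacuum_naptime = 60                  # seconds between autovacuum runs
autovacuum_vacuum_scale_factor = 0.2     # vacuum when ~20% of a table is dead tuples
autovacuum_analyze_scale_factor = 0.1    # analyze when ~10% of a table has changed
```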

Regards,
  Husam 

-Original Message-
From: Gnanakumar gna...@zoniac.com
Sent: Saturday, March 27, 2010 6:06 AM
To: pgsql-ad...@postgresql.org; pgsql-performance@postgresql.org
Subject: [ADMIN] Database size growing over time and leads to performance impact

Hi,

 

We're using PostgreSQL 8.2.  Recently, in our production database, there was
a severe performance impact, even though we're regularly doing both:

1. VACUUM FULL ANALYZE once a week during low-usage time, and

2. ANALYZE every day at low-usage time

 

Also, we noticed that the physical database size has grown up to 30 GB.  But
if I dump the database to SQL and import it locally on my machine, it is only
3.2 GB.  While searching Google for ways to optimize database size, I found
the following useful link:

 

http://www.linuxinsight.com/optimize_postgresql_database_size.html

 

It says that neither vacuumdb nor reindexdb really compacts the database
size; only dump/restore does, because of the MVCC architecture in
PostgreSQL, and this proved true here.

 

So, finally we decided to take our production database offline and performed
a dump/restore.  After this, the physical database size was reduced from
30 GB to 3.5 GB, and performance was also much better than before.

 

Physical database size was found using the following command:

du -sh /usr/local/pgsql/data/base/database-oid

 

I also cross-checked this size using
pg_size_pretty(pg_database_size(datname)).

 

Questions

1. Is there any version/update of PostgreSQL addressing this issue?

2. How are these issues handled in real time by other PostgreSQL users,
without taking downtime?

3. Any ideas or links on whether this is addressed in the upcoming
PostgreSQL 9.0 release?

 

**
 
This message may contain confidential or proprietary information intended only 
for the use of the 
addressee(s) named above or may contain information that is legally privileged. 
If you are 
not the intended addressee, or the person responsible for delivering it to the 
intended addressee, 
you are hereby notified that reading, disseminating, distributing or copying 
this message is strictly 
prohibited. If you have received this message by mistake, please immediately 
notify us by  
replying to the message and delete the original message and any copies 
immediately thereafter. 

Thank you. 
**
 
FACLD


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Optimizing a query

2006-12-13 Thread Tomeh, Husam

Have you run vacuum/analyze on the table?

--
  Husam 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Cloos
Sent: Wednesday, December 13, 2006 10:48 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimizing a query

I've currently got this table:

n=# \d nanpa
        Table "public.nanpa"
   Column   |     Type     | Modifiers
------------+--------------+-----------
 state      | character(2) |
 npa        | character(3) | not null
 nxx        | character(3) | not null
 ocn        | character(4) |
 company    | text         |
 ratecenter | text         |
 switch     | text         |
 effective  | date         |
 use        | character(2) | not null
 assign     | date         |
 ig         | character(1) |
Indexes:
    "nanpa_pkey" PRIMARY KEY, btree (npa, nxx) CLUSTER

and was doing queries of the form:

    select * from nanpa where npa=775 and nxx=413;

which were quite slow.  EXPLAIN showed that it was doing sequential
scans even though the primary key contained the two terms I was
selecting on.

Today, looking at it again in prep to this post, I noticed that the
numbers were being converted to ::text, and a quick test showed that
queries of the form:

    select * from nanpa where npa=775::bpchar and nxx=413::bpchar;

used the index.

I specified char(3) when I created the table simply because npa and
nxx are defined as three-character strings.  Tagging the queries is
a pain, especially as I often do queries of that form in psql(1).

(Incidently, there are multiple similar tables, also keyed on
(npa,nxx), which show the same problem.  The nanpa table above is
just a good example.)

Should I convert the columns to text?  Or create an additional index
that expects ::text args?  (If so, how?)

Or is there some other way to ensure the indices get used w/o having
to tag data in the queries?
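
For concreteness, one shape such an index could take (a hedged sketch, not
from the thread; the index name is made up) is an expression index over the
same ::text casts the planner applies to the untagged literals:

```sql
-- Hypothetical expression index matching the (npa)::text and (nxx)::text
-- casts the planner applies to unquoted numeric literals:
CREATE INDEX nanpa_npa_nxx_text ON nanpa ((npa::text), (nxx::text));

-- With it in place, the untagged form can be served by an index scan:
SELECT * FROM nanpa WHERE npa = 775 AND nxx = 413;
```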

Thanks,

-JimC
-- 
James Cloos [EMAIL PROTECTED] OpenPGP: 1024D/ED7DAEA6

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




Re: [PERFORM] Optimizing a query

2006-12-13 Thread Tomeh, Husam

Your npa and nxx columns have character datatypes, so you should use
quotes. Try:

explain analyze select * from nanpa where npa='775' and nxx='473';

If that does not work, you could try to influence the planner's
execution plan to favor index scans over sequential scans by tweaking a
couple of the postgres parameters, particularly effective_cache_size.
This parameter primarily sets the planner's estimate of the relative
likelihood of a particular table or index being in memory, and will thus
have a significant effect on whether the planner chooses indexes over
seqscans. Tweaking such parameters is usually done as a last resort.
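
As a sketch of how one might test this per session without touching
postgresql.conf (the value here is an assumption for a machine with roughly
1 GB of OS cache; on 8.x the parameter is counted in 8 kB pages):

```sql
SET effective_cache_size = 131072;   -- 131072 pages * 8 kB = 1 GB (assumed)
EXPLAIN ANALYZE SELECT * FROM nanpa WHERE npa = '775' AND nxx = '473';
RESET effective_cache_size;          -- back to the server default
```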

--
  Husam 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Cloos
Sent: Wednesday, December 13, 2006 2:35 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Optimizing a query

>>>>> "Husam" == Tomeh, Husam [EMAIL PROTECTED] writes:

Husam> Have you run vacuum/analyze on the table?

Yes, back when I first noticed how slow it was.
It did not make any difference.

explain analyze says:

n=# explain analyse  select * from nanpa where npa=775 and nxx=473;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Seq Scan on nanpa  (cost=0.00..5344.60 rows=4 width=105) (actual
time=371.718..516.816 rows=1 loops=1)
   Filter: (((npa)::text = '775'::text) AND ((nxx)::text = '473'::text))
 Total runtime: 516.909 ms
(3 rows)

vs:

n=# explain analyse  select * from nanpa where npa=775::char and nxx=473::char;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Index Scan using nanpa_pkey on nanpa  (cost=0.00..4.33 rows=1
width=105) (actual time=64.831..64.831 rows=0 loops=1)
   Index Cond: ((npa = '7'::bpchar) AND (nxx = '4'::bpchar))
 Total runtime: 64.927 ms
(3 rows)

BTW, I forgot to mention I'm at 8.1.4 on that box.

-JimC
-- 
James Cloos [EMAIL PROTECTED] OpenPGP: 1024D/ED7DAEA6



Re: [PERFORM] PostgreSQL Caching

2006-10-03 Thread Tomeh, Husam

Like many decent RDBMSs, the PostgreSQL server allocates its own shared
memory area in which data is cached. When it receives a query request,
the Postgres engine first checks its shared memory buffers; if the data
is not found there, the engine performs disk I/O to retrieve it from the
PostgreSQL data files and places it in the shared buffer area before
serving it back to the client. Blocks in the shared buffers are shared
and can therefore be accessed by other sessions. The size of the
PostgreSQL shared buffers is set with the postgresql.conf parameter
shared_buffers.
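
For illustration, the parameter lives in postgresql.conf; the figure below
is an assumed example, not a recommendation (on 8.x it is counted in 8 kB
buffers):

```
# postgresql.conf -- illustrative only
shared_buffers = 50000    # 50000 x 8 kB buffers = ~390 MB of shared cache
```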

Sincerely,

--
  Husam 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adnan
DURSUN
Sent: Tuesday, October 03, 2006 2:49 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] PostgreSQL Caching

Hi,

I wonder how PostgreSQL caches SQL query results. For example:

        * does postgres cache, in memory, a query result produced by
session A?
        * can session B use those results?

Best Regards

Adnan DURSUN




Re: [PERFORM] PostgreSQL Caching

2006-10-03 Thread Tomeh, Husam
 
        * When any session updates data that is already in the shared
buffers, does Postgres synchronize the data to both disk and the shared
buffers area immediately?

Not necessarily true. When a block is modified in the shared buffers,
the modified block is written to the Postgres WAL log. A periodic DB
checkpoint is performed to flush the modified blocks in the shared
buffers to the data files.
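
The checkpoint frequency itself is governed by postgresql.conf settings
such as these (8.x-era parameter names; the values are illustrative
assumptions):

```
# postgresql.conf -- checkpoint pacing, illustrative values
checkpoint_segments = 16    # checkpoint every 16 WAL segments (16 MB each)
checkpoint_timeout = 300    # ... or every 300 seconds, whichever comes first
```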

        * Does postgres cache SQL execution plans in memory for use by
other sessions? For example, when session A executes SELECT * FROM tab
WHERE col1 = val1 AND col2 = val2, does postgres save the
parser/optimizer result in memory, in order to prevent duplicate
execution of the parser and optimizer by other sessions and therefore
save time, since an execution plan was created before?

Query plans are not stored in the shared buffers and therefore cannot
be re-used by other sessions. They are cached only by the connection, at
the session level.
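
A session can reuse a plan explicitly via PREPARE; this sketch borrows the
table and column names from the question (they are placeholders):

```sql
-- The plan for get_tab lives only in this session's cache; another
-- session would have to PREPARE it again.
PREPARE get_tab (text, text) AS
    SELECT * FROM tab WHERE col1 = $1 AND col2 = $2;

EXECUTE get_tab('val1', 'val2');   -- re-uses the prepared plan
DEALLOCATE get_tab;                -- drop it from the session's cache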

Sincerely,

--
  Husam 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adnan
DURSUN
Sent: Tuesday, October 03, 2006 4:53 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreSQL Caching


Thanks,

I wonder about these:

        * When any session updates data that is already in the shared
buffers, does Postgres synchronize the data to both disk and the shared
buffers area immediately?
        * Does postgres cache SQL execution plans in memory for use by
other sessions? For example, when session A executes SELECT * FROM tab
WHERE col1 = val1 AND col2 = val2, does postgres save the
parser/optimizer result in memory, in order to prevent duplicate
execution of the parser and optimizer by other sessions and therefore
save time, since an execution plan was created before?

Sincerely

Adnan DURSUN





Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

2006-02-23 Thread Tomeh, Husam
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 23, 2006 3:49 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and
Create Index

Tomeh, Husam [EMAIL PROTECTED] writes:
 mtrac=# show maintenance_work_mem ;
  maintenance_work_mem
 ----------------------
  1048576   <==
 (1 row)

 mtrac=#
 mtrac=#
 mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
 ERROR:  out of memory   <==
 DETAIL:  Failed on request of size 134217728.   <==

It would be useful to look at the detailed allocation info that this
(should have) put into the postmaster log.  Also, if you could get
a stack trace back from the error, that would be even more useful.
To do that,
        * start psql
        * determine PID of connected backend (use pg_backend_pid())
        * in another window, as postgres user,
                gdb /path/to/postgres backend-PID
                gdb> break errfinish
                gdb> cont
        * issue failing command in psql
        * when breakpoint is reached,
                gdb> bt
                ... stack trace printed here ...
                gdb> q

regards, tom lane


Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

2006-02-23 Thread Tomeh, Husam
 
What's more interesting is this:

When I first connect to the database via psql and issue the create index
statement, I get, of course, the out of memory error. If I don't quit my
current session and re-run the same DDL statement, the index gets created
successfully! However, if after my first unsuccessful run I exit my
session, re-connect, and then run the DDL, it will fail again with the
same error. I have done this many times, and it appears to be a
consistent pattern of behavior. Not sure if that'll help, but I thought
it may be an interesting observation to think about.


 
Husam Tomeh

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tomeh,
Husam
Sent: Thursday, February 23, 2006 11:57 AM
To: Tom Lane
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

 
Thank for looking into this Tom. Here's the output from PostgreSQL log:

***  Postgresql Log:

TopMemoryContext: 32768 total in 4 blocks; 7232 free (9 chunks); 25536
used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256
used
TopTransactionContext: 8192 total in 1 blocks; 6816 free (0 chunks);
1376 used
MessageContext: 8192 total in 1 blocks; 7104 free (1 chunks); 1088 used
smgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320
used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1077575324 total in 115158 blocks; 1860896 free
(115146 chunks); 1075714428 used
ExecutorState: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 8192 total in 1 blocks; 3896 free (0 chunks); 4296 used
CacheMemoryContext: 516096 total in 6 blocks; 198480 free (2 chunks);
317616 used
mort_ht: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_depend_depender_index: 1024 total in 1 blocks; 256 free (0 chunks);
768 used
pg_depend_reference_index: 1024 total in 1 blocks; 256 free (0 chunks);
768 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0
chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_conversion_default_index: 1024 total in 1 blocks; 192 free (0
chunks); 832 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks);
768 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks);
768 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
MdSmgr: 8192 total in 1 blocks

[PERFORM] 0ut of Memory Error during Vacuum Analyze

2006-02-14 Thread Tomeh, Husam
 
This is the second time I'm getting an out of memory error when I start a
database vacuum or try to vacuum any table. Note this machine has been
used for batch data-loading purposes.

=# vacuum analyze code;
ERROR:  out of memory
DETAIL:  Failed on request of size 1073741820.

I'm running Postgres 8.1.1 on a RedHat 2.6 kernel (HP server).
My maintenance work area has never been changed. It's set to 1GB
(maintenance_work_mem = 1048576). Physical memory: 32 GB.

Bouncing the database does not help. 

Two workarounds I have used so far:

  1) Decreasing maintenance_work_mem to 512MB, with which vacuum analyze
works just fine.

Or

  2) Bouncing the server (maintaining the original 1GB
maintenance_work_mem), which also works.
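
Workaround 1 can also be applied per session without editing
postgresql.conf (a sketch; 8.1 takes the value as a plain integer in kB):

```sql
SET maintenance_work_mem = 524288;   -- 512 MB, for this session only
VACUUM ANALYZE code;
RESET maintenance_work_mem;          -- back to the server's 1 GB setting
```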

I have not had that error on the production instances (which are
identical copies of the loading instance) - only on the loading instance.

Any explanation as to why, and how to avoid it?  Thanks.



 
Husam  


Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tomeh, Husam

No special data types. The table is a pretty large one at over 15 GB, and
the index is about 1.5 GB. Here's the table structure:

 Column  | Type  | Modifiers
-+---+---
 county_id   | numeric(5,0)  | not null
 batch_dt| numeric(8,0)  | not null
 batch_seq   | numeric(5,0)  | not null
 mtg_seq_nbr | numeric(1,0)  | not null
 mtg_rec_dt  | numeric(8,0)  |
 mtg_doc_nbr | character varying(12) |
 mtg_rec_bk  | character varying(6)  |
 mtg_rec_pg  | character varying(6)  |
 mtg_amt | numeric(11,0) |
 lndr_cd | character varying(10) |
 lndr_nm | character varying(30) |
 mtg_assm_ind| character(1)  |
 mtg_typ | character varying(5)  |
 adj_rate_ind| character(1)  |
 mtg_term_nbr| numeric(5,0)  |
 mtg_term_cd | character varying(4)  |
 mtg_due_dt  | numeric(8,0)  |
 mtg_deed_typ| character varying(6)  |
 reverse_mtg_ind | character(1)  |
 refi_ind| character(1)  |
 conform_ind | character(1)  |
 cnstr_ln_ind| character(1)  |
 title_co_cd | character varying(5)  |
 state_id| numeric(5,0)  |
 msa | numeric(4,0)  |
Indexes:
uq_mortgage UNIQUE, btree (county_id, batch_dt, batch_seq,
mtg_seq_nbr)
mortgage_idxc_county_id_mtg_rec_dt btree (county_id, mtg_rec_dt)
mortgage_idxc_state_id_mtg_rec_dt btree (state_id, mtg_rec_dt)

-

 Here's the test I did with maintenance_work_mem = 1GB:


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 1048576   <==
(1 row)

mtrac=#
mtrac=#
mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
ERROR:  out of memory   <==
DETAIL:  Failed on request of size 134217728.   <==

 Then I changed the parameter to 512 MB:  


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 524288   <==
(1 row)

mtrac=#  create index mort_ht_512 on mortgage(county_id,mtg_rec_dt);
CREATE INDEX
---



Regards,

 
Husam  

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 14, 2006 2:16 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and
Create Index

Tomeh, Husam [EMAIL PROTECTED] writes:
 I have run pg_dump and had no errors. I also got this error when
 creating one index but not another. When I lowered my
 maintenance_work_mem, the create index succeeded. 

Create index too?  Hm.  That begins to sound more like a memory leak.
Do you have any custom data types or anything like that in this
table?  Can you put together a self-contained test case using dummy
data?

regards, tom lane


Re: [PERFORM] Should Oracle outperform PostgreSQL on a complex multidimensional query?

2005-12-13 Thread Tomeh, Husam
 
Postgres 8.1 performance rocks (compared with 8.0), especially with the use
of in-memory index bitmaps. Complex queries that used to take 30+ minutes
now take only a few minutes to complete in 8.1.  Many thanks to all the
wonderful developers for the huge 8.1 performance boost.

---
 
  Husam Tomeh

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Sunday, December 11, 2005 12:39 PM
To: Pål Stenslet
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Should Oracle outperform PostgreSQL on a complex 
multidimensional query?

Perhaps you should be trying this on PG 8.1?  In any case, without
specific details of your schema or a look at EXPLAIN ANALYZE results,
it's unlikely that anyone is going to have any useful comments for you.

regards, tom lane





Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0

2005-09-22 Thread Tomeh, Husam
 
Have you tried adjusting effective_cache_size? With it set appropriately,
the planner may produce a better plan for you without your needing to set
enable_seqscan to off.


-- 
 Husam 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jean-Pierre
Pelletier
Sent: Thursday, September 22, 2005 3:28 PM
To: John Arbash Meinel
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0

With enable-seq-scan = off, it runs in 350 ms so better than 484 ms
but still much slower than 32 ms in 8.0.1.

==

Table public.content
   Column   |  Type   | Modifiers
+-+---
 contentid  | integer | not null
 supplierid | integer |
 priceid| integer |

  Table public.price
Column |  Type  | Modifiers
---++---
 priceid   | integer| not null
 itemid| integer|
 supplierid| integer|
 locationid| smallint   |
 fromdate  | date   |
 unitvalue | numeric|
 insertedbypersonid| integer|
 lastupdatedbypersonid | integer|
 inserttimestamp   | timestamp(0) without time zone |
 lastupdatetimestamp   | timestamp(0) without time zone |
Indexes:
price_pkey PRIMARY KEY, btree (priceid)

   Table "public.supplier"
       Column        |              Type              |                  Modifiers
---------------------+--------------------------------+----------------------------------------------
 supplierid          | integer                        | not null default nextval('SupplierId'::text)
 supplierdescription | character varying(50)          | not null
 inserttimestamp     | timestamp(0) without time zone | default now()
 approvaldate        | date                           |
Indexes:
    "Supplier Id" PRIMARY KEY, btree (supplierid)
    "Supplier Description" UNIQUE, btree (upper(supplierdescription::text))
    "Supplier.InsertTimestamp" btree (inserttimestamp)
Check constraints:
    "Supplier Name cannot be empty" CHECK (btrim(supplierdescription::text) <> ''::text)




Explain analyze with enable-seq-scan = off on 8.1 beta2
                                                   QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Merge Left Join  (cost=10005.60..101607964.74 rows=1 width=0) (actual
time=729.067..729.078 rows=1 loops=1)
   Merge Cond: ("outer".priceid = "inner".priceid)
   ->  Sort  (cost=10005.60..10005.60 rows=1 width=4) (actual
time=0.064..0.067 rows=1 loops=1)
         Sort Key: c.priceid
         ->  Nested Loop Left Join  (cost=1.00..10005.59 rows=1
width=4) (actual time=0.038..0.049 rows=1 loops=1)
               ->  Seq Scan on content c  (cost=1.00..10001.01
rows=1 width=8) (actual time=0.008..0.011 rows=1 loops=1)
               ->  Index Scan using "Supplier Id" on supplier s
(cost=0.00..4.56 rows=1 width=4) (actual time=0.016..0.019 rows=1 loops=1)
                     Index Cond: ("outer".supplierid = s.supplierid)
   ->  Index Scan using "Price Id" on price p  (cost=0.00..1606505.44
rows=581475 width=4) (actual time=0.008..370.854 rows=164842 loops=1)
 Total runtime: 729.192 ms

- Original Message - 
From: John Arbash Meinel [EMAIL PROTECTED]
To: Jean-Pierre Pelletier [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org
Sent: Thursday, September 22, 2005 6:03 PM
Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0




Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0

2005-09-22 Thread Tomeh, Husam
 
The recommendation for effective_cache_size is about 2/3 of your
server's physical RAM (if the server is dedicated only for postgres).
This should have a significant impact on whether Postgres planner
chooses indexes over sequential scans. 

-- 
 Husam 


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jean-Pierre
Pelletier
Sent: Thursday, September 22, 2005 4:10 PM
To: josh@agliodbs.com
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0

my settings are:

effective_cache_size = 1000
random_page_cost = 4
work_mem = 2

- Original Message - 
From: Josh Berkus josh@agliodbs.com
To: Jean-Pierre Pelletier [EMAIL PROTECTED]
Sent: Thursday, September 22, 2005 6:58 PM
Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0


> Jean-Pierre,
>
>> How do I produce an Index scan plan ?
>
> You just did.   What's your effective_cache_size set to?
> random_page_cost?  work_mem?
>
> --
> --Josh
>
> Josh Berkus
> Aglio Database Solutions
> San Francisco





[PERFORM] Vacuum Full Analyze taking so long

2005-07-25 Thread Tomeh, Husam
 
I have an 8.0.2 postgresql database about 180 GB in size, running on a
2.6 RedHat kernel with 32 GB of RAM and 2 CPUs. I'm running the vacuum
full analyze command, and it has been running for at least two consecutive
days with no other processes running (it's an offline loading server). I
tweaked maintenance_work_mem to its max (2 GB) with work_mem of 8M. I
have no issues with my checkpoints. I can still see I/O activity against
the physical files of the property table and its two indexes (primary
key and an r index). The property files are about 128 GB and the indexes
are about 15 GB. I have run the same maintenance job on a different box
(staging) with an identical hardware config (except with 64 GB instead of
32), and it took less than 12 hours.   Any clue or tip is really
appreciated.

I also read a comment by Tom Lane that terminating the process should be
crash-safe if I had to.

Thanks,


-- 
 Husam 


