Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Mohamed Hashim
Hi Sam, Tomas,

In my first post I mentioned how much memory is allocated (shared_buffers,
effective_cache_size, work_mem, etc.), my OS and hardware information, and
which basic settings I have changed.

Regarding EXPLAIN ANALYZE, I gave one sample query because I thought that if
I tune that particular table, which has almost 160 million (16 crore) records,
my problem would be solved...

Regards
Hashim

On Tue, Nov 8, 2011 at 5:20 PM, Tomas Vondra t...@fuzzy.cz wrote:

 On 8 November 2011, 4:21, Mohamed Hashim wrote:
  Hi all,
 
  Thanks for all your responses.
 
  Sorry for late response
 
  Earlier we used Postgres 8.3.10 on a desktop computer (as the server) with a
  low system configuration (I2 core with 4GB RAM), and the application was
  slow; I didn't change any postgres config settings.
 
  Maybe because of the low configuration we thought the application was slow,
  so we opted for the higher-configuration server (with RAID 1) which I
  mentioned earlier.
 
  I thought the application would run fast, but unfortunately there was no
  improvement, so I tried changing the postgres config settings and tuning my
  queries wherever possible, but I still was not able to improve the
  performance.

 As Sam Gendler already wrote, we really can't help you until you post all
 the relevant info. So far we've seen a single EXPLAIN ANALYZE output and a
 very vague description of the hardware.

 We need to know more about the hardware and the basic config options
 (shared buffers, effective cache size, work mem, etc.). We need to know
 how much memory is actually available to PostgreSQL and page cache (how
 much is consumed by the application - as I understand it, it runs on the
 same machine). We need to know what OS it's running on, and we need to see
 iostat/vmstat output collected when the app is slow.
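 For reference, one way to pull exactly those values from the server is a
 query against pg_settings; a minimal sketch, and the parameter list below is
 only a reasonable starting selection, not an exhaustive one:

 SELECT name, setting, unit
   FROM pg_settings
  WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
                 'maintenance_work_mem', 'checkpoint_segments',
                 'random_page_cost');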

 Please read this and perform the basic tuning (and let us know what values
 you've used):

 http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

 Also post

  So will it be helpful if we try GiST or GIN for the integer array[] column
  (source_detail) with enable_seqscan=off and
  default_statistics_target=1000?

 This is severely premature - it might help, but you should do the basic
 tuning first. It might actually cause you more trouble. You've already
 made this mistake - fixing something without verifying it's actually a
 problem - by requesting a RAID1 config. Don't make that mistake again.
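 For reference, a GIN index on the array column would look roughly like the
 sketch below. This is illustrative only: with inheritance partitioning it has
 to be created on each child table of stk_source (the index name is made up),
 and it only helps if the positional tests are rewritten as containment tests,
 which are not strictly equivalent because @> ignores element positions.

 -- sketch: GIN index on one partition; repeat for the other children
 CREATE INDEX stk_source_201112_source_detail_gin
     ON _100410_201112.stk_source USING gin (source_detail);

 -- a query could then use containment instead of positional equality, e.g.
 -- WHERE ss.source_detail @> ARRAY[1, 12680]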

 Tomas




-- 
Regards
Mohamed Hashim.N
Mobile:09894587678


Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-07 Thread Mohamed Hashim
Hi all,

Thanks for all your responses.

Sorry for late response

Earlier we used Postgres 8.3.10 on a desktop computer (as the server) with a
low system configuration (I2 core with 4GB RAM), and the application was
slow; I didn't change any postgres config settings.

Maybe because of the low configuration we thought the application was slow,
so we opted for the higher-configuration server (with RAID 1) which I
mentioned earlier.

I thought the application would run fast, but unfortunately there was no
improvement, so I tried changing the postgres config settings and tuning my
queries wherever possible, but I still was not able to improve the
performance.


So will it be helpful if we try GiST or GIN for the integer array[] column
(source_detail) with enable_seqscan=off and default_statistics_target=1000?
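A narrower alternative to raising default_statistics_target globally would be
to raise the target only for the column involved; a minimal sketch (whether
this actually helps the array predicates here is uncertain):

-- sketch: per-column statistics target instead of a global change
ALTER TABLE stk_source ALTER COLUMN source_detail SET STATISTICS 1000;
ANALYZE stk_source;  -- re-analyze so the new target takes effect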

Regards
Hashim



On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni roadrunn...@gmx.at wrote:

 On 03.11.2011 17:08, Tomas Vondra wrote:

 On 3 November 2011, 16:02, Mario Weilguni wrote:
 [snip]

 No doubt about that, querying tables using conditions on array columns is
 not the best direction in most cases, especially when those tables are
 huge.

 Still, the interesting part here is that the OP claims this worked just
 fine in the older version and after an upgrade the performance suddenly
 dropped. This could be caused by many things, and we're just guessing
 because we don't have any plans from the old version.

 Tomas



 Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about
 problems after the migration from 8.x to 9.x. Or did I miss something here?

 Regards,
 Mario



 --
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance




-- 
Regards
Mohamed Hashim.N
Mobile:09894587678


Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-02 Thread Mohamed Hashim
Dear All

Thanks for your suggestions & replies.

Below is a sample query that I ran for one particular bill_id:

EXPLAIN ANALYZE
SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/') AS date,
       mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
FROM acc_bill_items_106 abi
JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
JOIN stk_source ss ON ss.source_detail[1]=1 AND ss.source_detail[2]=abi.item_id
JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
JOIN master_product_106 mp ON mp.product_id=sd106.product_id
JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
WHERE abi.bill_id=12680;



                                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..7230339.59 rows=54369 width=39) (actual time=158156.895..158157.206 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..7149579.10 rows=8192 width=32) (actual time=158156.863..158157.172 rows=1 loops=1)
         ->  Nested Loop  (cost=0.00..7119922.60 rows=8192 width=27) (actual time=158156.855..158157.164 rows=1 loops=1)
               ->  Nested Loop  (cost=0.00..7086865.70 rows=8192 width=19) (actual time=158156.835..158157.143 rows=1 loops=1)
                     Join Filter: (abi.item_id = ss.source_detail[2])
                     ->  Nested Loop  (cost=0.00..604.54 rows=2 width=23) (actual time=2.782..2.786 rows=1 loops=1)
                           ->  Index Scan using acc_bill_details_106_pkey on acc_bill_details_106 abd  (cost=0.00..6.29 rows=1 width=12) (actual time=0.010..0.012 rows=1 loops=1)
                                 Index Cond: (bill_id = 12680)
                           ->  Nested Loop  (cost=0.00..598.19 rows=2 width=19) (actual time=2.770..2.772 rows=1 loops=1)
                                 Join Filter: (abi.item_id = rip.receipt_item_id)
                                 ->  Seq Scan on receipt_item_price_106 rip  (cost=0.00..162.48 rows=4216 width=11) (actual time=0.005..0.562 rows=4218 loops=1)
                                 ->  Materialize  (cost=0.00..140.59 rows=2 width=8) (actual time=0.000..0.000 rows=1 loops=4218)
                                       ->  Seq Scan on acc_bill_items_106 abi  (cost=0.00..140.58 rows=2 width=8) (actual time=0.412..0.412 rows=1 loops=1)
                                             Filter: (bill_id = 12680)
                     ->  Materialize  (cost=0.00..7024562.68 rows=819222 width=33) (actual time=0.035..153869.575 rows=19010943 loops=1)
                           ->  Append  (cost=0.00..7014065.57 rows=819222 width=33) (actual time=0.034..145403.828 rows=19010943 loops=1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..45.10 rows=5 width=36) (actual time=0.001..0.001 rows=0 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..6.32 rows=2596 width=33) (actual time=0.033..118.019 rows=66356 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..90405.31 rows=10559 width=33) (actual time=0.010..490.712 rows=288779 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..6901388.84 rows=806062 width=33) (actual time=13.382..142493.302 rows=18655808 loops=1)
                                       Filter: (source_detail[1] = 1)
               ->  Index Scan using sd106_stock_id_idx on stock_details_106 sd106  (cost=0.00..4.00 rows=1 width=8) (actual time=0.014..0.014 rows=1 loops=1)
                     Index Cond: (sd106.stock_id = ss.stock_id)
         ->  Index Scan using master_product_pkey on master_product_106 mp  (cost=0.00..3.59 rows=1 width=13) (actual time=0.006..0.006 rows=1 loops=1)
               Index Cond: (mp.product_id = sd106.product_id)
   ->  Index Scan using std106_stock_id_idx on stock_transaction_detail_106 std  (cost=0.00..9.70 rows=4 width=19) (actual time=0.007..0.009 rows=1 loops=1)
         Index Cond: (std.stock_id = ss.stock_id)
 Total runtime: 158240.795 ms


http://explain.depesz.com/s/Tyc
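Almost all of the 158 seconds in the plan above is spent in the
Materialize/Append branch that seq-scans every stk_source partition (about 19
million rows) for source_detail[1] = 1. A minimal sketch of expression indexes
that match the positional predicates directly is shown below; the index names
are made up, and every child table of stk_source needs its own copy:

-- sketch: btree expression indexes on the array elements used in the join
-- (repeat for _100410_200809.stk_source and _100410_200910.stk_source too)
CREATE INDEX stk_source_201011_detail_idx
    ON _100410_201011.stk_source ((source_detail[1]), (source_detail[2]));
CREATE INDEX stk_source_201112_detail_idx
    ON _100410_201112.stk_source ((source_detail[1]), (source_detail[2]));
ANALYZE _100410_201011.stk_source;
ANALYZE _100410_201112.stk_source;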


Similarly, I have used such queries on various detail pages and views; if I
query one month of transactions, that also takes a very long time.

I will try to upgrade to the latest version and will try to tune my queries
further. So wouldn't changing the conf settings also help to get better
performance??



Thanks & Regards
Hashim

On Tue, Nov 1, 2011 at 7:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Marcus Engene meng...@engene.se writes:
  After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of
  problems with queries with many joins. Queries that 

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-31 Thread Mohamed Hashim
Any ideas or suggestions on how to improve my database performance???

Regards
Hashim

On Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim nmdhas...@gmail.com wrote:

 Thanks Alban & Gregg.


 I will describe a little more about that table:


- We are using a PHP application with Apache server & Postgresql 9.0.3
on a dedicated server.
- The stk_source table is mainly used to track the transactions from
parent to child.

              Table _100410.stk_source
         Column         |   Type    |                      Modifiers
 -----------------------+-----------+------------------------------------------------------
  source_id             | integer   | not null default nextval('source_id_seq'::regclass)
  stock_id              | integer   |
  source_detail         | integer[] |
  transaction_reference | integer   |
  is_user_set           | boolean   | default false


 We store transaction_type and transaction_id in the source_detail column,
 which is an integer array, for each transaction.

 We use various functions to get the info based on transaction type

 For example:

 In the function to get the batch details we have used:

 FOR batch_id_rec IN
     SELECT DISTINCT(batch_id)
     FROM order_status_batches osb
     JOIN batch_status_stock bss ON osb.status_id=bss.batch_status_id
     WHERE stock_id IN (SELECT source_detail[2]
                        FROM stk_source
                        WHERE stock_id IN (SELECT std_i.stock_id
                                           FROM order_details_shipments ods
                                           JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id
                                                AND ods.order_id=sps.order_id AND ods.item_id=sps.item_id
                                           JOIN stock_transaction_detail_106 std ON std.transaction_id=sps.transaction_id
                                           JOIN stock_transaction_detail_106 std_i ON std.stock_id=std_i.stock_id
                                                AND std_i.transaction_type = 'i'::bpchar
                                           WHERE shipment_item_id=$1)
                          AND source_detail[1]=3) LOOP

 ...

 

 ..

 Similarly, we have used queries like this in PHP pages and views:

 SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/') AS date,
        mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
 FROM acc_bill_items_106 abi
 JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND abd.bill_status='act'
 JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id AND ss.source_detail[1]=1
 JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id
 JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id
 JOIN master_product_106_table mp ON mp.product_id=sd106.product_id
 JOIN receipt_item_price_106_table rip ON rip.receipt_item_id=abi.item_id
 WHERE abi.bill_id=$bill_id AND std.transaction_type='o';

 So wherever we have such JOINs, or use them inside functions, the performance
 is very low; sometimes a query takes more than 45 minutes to return results.

 Normally if we run Select * from some_table... it returns very fast
 because the table has fewer records.

 But when I run Select * from stk_source, or try to find the actual cost with

 EXPLAIN ANALYZE SELECT * FROM stk_source;

 I couldn't retrieve the planner details even after waiting for more than 50
 to 60 minutes.

 So the question is: in spite of having a good server with a high
 configuration, and having changed the postgresql configuration settings, why
 is the system still crawling?


 What are the other parameters I have to look at, or what other config
 settings should be changed, to get the best performance?

 Kindly help to sort out this problem..


 Thanks in advance..!!

 Regards
 Hashim








 On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys haram...@gmail.com wrote:

 On 28 October 2011 09:02, Mohamed Hashim nmdhas...@gmail.com wrote:
  EXPLAIN select * from stk_source ;
                                       QUERY PLAN
  --------------------------------------------------------------------------------------
   Result  (cost=0.00..6575755.39 rows=163132513 width=42)
     ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
           ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460 width=42)

 That plan gives you the best possible performance given your query.
 Your example probably doesn't fit the problem you're investigating.

 --
 If you can't see the forest for the trees,
 Cut the trees and you'll see there is no forest.




 --
 Regards
 Mohamed Hashim.N
 Mobile:09894587678




-- 
Regards
Mohamed Hashim.N
Mobile:09894587678


[PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
I have a quad-core server with 8GB RAM

vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5607  @ 2.27GHz
stepping: 2
cpu MHz : 1197.000
cache size  : 8192 KB


MemTotal:8148636 kB
MemFree: 4989116 kB
Buffers:8464 kB
Cached:  2565456 kB
SwapCached:81196 kB
Active:  2003796 kB
Inactive: 843896 kB
Active(anon):1826176 kB
Inactive(anon):   405964 kB
Active(file): 177620 kB
Inactive(file):   437932 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:  16779260 kB
SwapFree:   16303356 kB
Dirty:  1400 kB
Writeback: 0 kB
AnonPages:208260 kB
Mapped:  1092008 kB
Shmem:   1958368 kB
Slab: 224964 kB
SReclaimable:  60136 kB
SUnreclaim:   164828 kB
KernelStack:2864 kB
PageTables:35684 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit:20853576 kB
Committed_AS:3672176 kB
VmallocTotal:   34359738367 kB
VmallocUsed:  303292 kB
VmallocChunk:   34359429308 kB
HardwareCorrupted: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:6144 kB
DirectMap2M: 2082816 kB
DirectMap1G: 6291456 kB

My database size is

 pg_size_pretty
----------------
 21 GB

I have one table which has more than 160500460 rows, and I have partitioned
it year-wise into different schemas.

 stk_source
                                     Table _100410.stk_source
         Column         |   Type    |                      Modifiers                       | Storage  | Description
 -----------------------+-----------+------------------------------------------------------+----------+-------------
  source_id             | integer   | not null default nextval('source_id_seq'::regclass)  | plain    |
  stock_id              | integer   |                                                       | plain    |
  source_detail         | integer[] |                                                       | extended |
  transaction_reference | integer   |                                                       | plain    |
  is_user_set           | boolean   | default false                                         | plain    |
 Triggers:
     insert_stk_source_trigger BEFORE INSERT ON stk_source FOR EACH ROW
     EXECUTE PROCEDURE stk_source_insert_trigger()
 Child tables: _100410_200809.stk_source,
               _100410_200910.stk_source,
               _100410_201011.stk_source,
               _100410_201112.stk_source
 Has OIDs: yes

It also has these indexes:

    ss_source_id_pk PRIMARY KEY, btree (source_id)
    stk_source_stock_id_idx btree (stock_id)
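With inheritance-based partitioning the indexes defined on the parent do not
automatically exist on the child tables, so it is worth checking what each
child actually has; a minimal sketch using pg_indexes:

-- sketch: list the indexes that actually exist on every stk_source table
-- (parent and children share the table name, only the schema differs)
SELECT schemaname, indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'stk_source'
 ORDER BY schemaname;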


The first two years' data is very small, so there are no issues there, and
the next two years' table sizes are 2GB & 10 GB respectively.

EXPLAIN select * from stk_source ;
                                     QUERY PLAN
--------------------------------------------------------------------------------------
 Result  (cost=0.00..6575755.39 rows=163132513 width=42)
   ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
         ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460 width=42)
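Since the row counts are so lopsided across the children, a per-partition
breakdown helps show where the bulk of the data (and the scan time) lives; a
minimal sketch against the system catalogs, with row counts taken from the
planner's estimates:

-- sketch: list each child of stk_source with estimated rows and on-disk size
SELECT c.oid::regclass                               AS partition,
       c.reltuples::bigint                           AS estimated_rows,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_inherits i
  JOIN pg_class c ON c.oid = i.inhrelid
 WHERE i.inhparent = '_100410.stk_source'::regclass
 ORDER BY pg_total_relation_size(c.oid) DESC;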


Because of this table my total database performance is affected. I wanted to
optimize the settings, and after reading the post below I changed some
configurations, but it was no use; the system is still slow:
http://comments.gmane.org/gmane.comp.db.postgresql.performance/29561

Actually we are using a PHP application that uses a Postgresql 9.0.3
database. The server is accessed by 40-50 users daily, so we want more
performance. My config details are below.

Could anyone help with how to tune the settings for better performance?

Thanks in advance..

# - Memory -

shared_buffers = 2GB                     # min 128kB
                                         # (change requires restart)
#temp_buffers = 8MB                      # min 800kB
max_prepared_transactions = 0            # zero disables the feature
                                         # (change requires restart)

# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.


work_mem = 48MB                          # min 64kB
maintenance_work_mem = 256MB             # min 1MB
max_stack_depth = 6MB                    # min 100kB


# - Planner Cost Constants -

seq_page_cost = 1.0                      # measured on an arbitrary scale
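Beyond the individual settings, a rough buffer-cache hit ratio is a quick
sanity check on whether shared_buffers is in the right ballpark; a minimal
sketch (it only counts PostgreSQL's own buffer hits, not hits served from the
OS page cache):

-- sketch: rough shared_buffers hit ratio for the current database
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
  FROM pg_stat_database
 WHERE datname = current_database();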

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
Actually we are using various views and functions to get the info for
reporting purposes, and in those views and functions we have used or joined
the table mentioned above.

I thought I would get a reply from someone on the lists, which is the only
reason I posted there; anyway, I will continue with only the pgsql-performance
mailing list.

Regards
Hashim

On Fri, Oct 28, 2011 at 1:28 PM, Gregg Jaskiewicz gryz...@gmail.com wrote:

 What sort of queries are you running against it? The select * from... is
 not really (hopefully) a query you are running from your PHP app.




-- 
Regards
Mohamed Hashim.N
Mobile:09894587678


Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
Thanks Alban & Gregg.


I will describe a little more about that table:


   - We are using a PHP application with Apache server & Postgresql 9.0.3 on a
   dedicated server.
   - The stk_source table is mainly used to track the transactions from parent
   to child.

                Table _100410.stk_source
         Column         |   Type    |                      Modifiers
 -----------------------+-----------+------------------------------------------------------
  source_id             | integer   | not null default nextval('source_id_seq'::regclass)
  stock_id              | integer   |
  source_detail         | integer[] |
  transaction_reference | integer   |
  is_user_set           | boolean   | default false


We store transaction_type and transaction_id in the source_detail column,
which is an integer array, for each transaction.

We use various functions to get the info based on transaction type

For example:

In the function to get the batch details we have used:

FOR batch_id_rec IN
    SELECT DISTINCT(batch_id)
    FROM order_status_batches osb
    JOIN batch_status_stock bss ON osb.status_id=bss.batch_status_id
    WHERE stock_id IN (SELECT source_detail[2]
                       FROM stk_source
                       WHERE stock_id IN (SELECT std_i.stock_id
                                          FROM order_details_shipments ods
                                          JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id
                                               AND ods.order_id=sps.order_id AND ods.item_id=sps.item_id
                                          JOIN stock_transaction_detail_106 std ON std.transaction_id=sps.transaction_id
                                          JOIN stock_transaction_detail_106 std_i ON std.stock_id=std_i.stock_id
                                               AND std_i.transaction_type = 'i'::bpchar
                                          WHERE shipment_item_id=$1)
                         AND source_detail[1]=3) LOOP

...



..
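When a function like this is slow, it helps to pull the stk_source lookup out
and time it on its own; a minimal sketch, with the $1 parameter replaced by a
made-up literal shipment_item_id:

-- sketch: isolate the inner stk_source lookup used by the function
-- (the value 101 stands in for $1 and is purely illustrative)
EXPLAIN ANALYZE
SELECT source_detail[2]
  FROM stk_source
 WHERE source_detail[1] = 3
   AND stock_id IN (SELECT std_i.stock_id
                      FROM order_details_shipments ods
                      JOIN shipment_pack_stock sps
                        ON sps.pack_id = ods.pack_id
                       AND ods.order_id = sps.order_id
                       AND ods.item_id = sps.item_id
                      JOIN stock_transaction_detail_106 std
                        ON std.transaction_id = sps.transaction_id
                      JOIN stock_transaction_detail_106 std_i
                        ON std.stock_id = std_i.stock_id
                       AND std_i.transaction_type = 'i'::bpchar
                     WHERE shipment_item_id = 101);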

Similarly, we have used queries like this in PHP pages and views:

SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/') AS date,
       mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
FROM acc_bill_items_106 abi
JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND abd.bill_status='act'
JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id AND ss.source_detail[1]=1
JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id
JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id
JOIN master_product_106_table mp ON mp.product_id=sd106.product_id
JOIN receipt_item_price_106_table rip ON rip.receipt_item_id=abi.item_id
WHERE abi.bill_id=$bill_id AND std.transaction_type='o';
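If a GIN index on source_detail were added (as raised elsewhere in the
thread), the positional tests in the stk_source join would have to become a
containment test for the index to be usable. A minimal sketch follows; it is
not strictly equivalent, because @> only checks that both values occur
somewhere in the array regardless of position, and $bill_id is replaced by
12680, the value from the plan posted in this thread:

-- sketch: the same view query with the stk_source_table join rewritten to use
-- array containment instead of positional element equality
SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/') AS date,
       mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
FROM acc_bill_items_106 abi
JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND abd.bill_status='act'
JOIN stk_source_table ss ON ss.source_detail @> ARRAY[1, abi.item_id]
JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id
JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id
JOIN master_product_106_table mp ON mp.product_id=sd106.product_id
JOIN receipt_item_price_106_table rip ON rip.receipt_item_id=abi.item_id
WHERE abi.bill_id=12680 AND std.transaction_type='o';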

So wherever we have such JOINs, or use them inside functions, the performance
is very low; sometimes a query takes more than 45 minutes to return results.

Normally if we run Select * from some_table... it returns very fast because
the table has fewer records.

But when I run Select * from stk_source, or try to find the actual cost with

EXPLAIN ANALYZE SELECT * FROM stk_source;

I couldn't retrieve the planner details even after waiting for more than 50
to 60 minutes.

So the question is: in spite of having a good server with a high
configuration, and having changed the postgresql configuration settings, why
is the system still crawling?


What are the other parameters I have to look at, or what other config
settings should be changed, to get the best performance?

Kindly help to sort out this problem..


Thanks in advance..!!

Regards
Hashim







On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys haram...@gmail.com wrote:

 On 28 October 2011 09:02, Mohamed Hashim nmdhas...@gmail.com wrote:
  EXPLAIN select * from stk_source ;
                                       QUERY PLAN
  --------------------------------------------------------------------------------------
   Result  (cost=0.00..6575755.39 rows=163132513 width=42)
     ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
           ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794 width=42)
           ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460 width=42)

 That plan gives you the best possible performance given your query.
 Your example probably doesn't fit the problem you're investigating.

 --
 If you can't see the forest for the trees,
 Cut the trees and you'll see there is no forest.




-- 
Regards
Mohamed Hashim.N
Mobile:09894587678