[PERFORM] How to reduce IOWAIT and CPU idle time?

2016-09-10 Thread Samir Magar
Hello,


My application normally has 25 to 30 connections and performs a lot of
insert/update/delete operations.
The database size is 100 GB.
iowait is at 40% to 45% and CPU idle time is at 45% to 50%.
TOTAL RAM = 8 GB   TOTAL CPU = 4

postgresql.conf parameters:

shared_buffers = 2GB
work_mem = 100MB
effective_cache_size = 2GB
maintenance_work_mem = 500MB
autovacuum = off
wal_buffers = 64MB


How can I reduce iowait and CPU idle time? It is slowing down all the
queries: queries that used to take 1 second now take 12-15 seconds.


version details:
Linux zabbix-inst.novalocal 3.10.0-229.7.2.el7.x86_64 #1 SMP Fri May 15
21:38:46 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
database version: PostgreSQL 9.2.13 (community).

Thanks,
Samir Magar
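
For comparison, a common starting point for a write-heavy server with 8 GB
of RAM looks something like the sketch below; the values are illustrative
only, and autovacuum in particular should normally stay on for a heavy
insert/update/delete workload:

```
shared_buffers = 2GB
work_mem = 16MB               # 100MB x 25-30 connections can exhaust 8 GB RAM
effective_cache_size = 6GB    # roughly what the OS page cache can hold
maintenance_work_mem = 500MB
autovacuum = on               # turning it off invites table and index bloat
wal_buffers = 16MB            # values beyond 16MB rarely help
```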


Re: [PERFORM] How to reduce IOWAIT and CPU idle time?

2016-09-11 Thread Samir Magar
Hello Venkata,

Thank you for the reply!

I forgot to specify the application: it is the Zabbix tool, running on a
PostgreSQL database. All types of queries are running slow, and I can see
the application is writing continuously.
Yesterday I changed effective_io_concurrency to 25 (it was previously at
the default), but this has not solved the problem.

Yes, you are right: the database has grown from 5 GB to 100 GB, and the
slowness may well be in the disk. However, we cannot replace the disk
right now.

I also ran VACUUM and ANALYZE manually on all the tables, but it is still
not helping. Can you think of any other setting I should enable?
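
When manual VACUUM/ANALYZE does not help, it can be useful to check how
much dead-tuple churn each table carries (with autovacuum off, bloat builds
up quickly). A sketch using the standard statistics view, which exists on
9.2:

```sql
-- Tables with the most dead tuples; a high dead percentage suggests bloat
-- that plain VACUUM marks reusable but does not shrink on disk.
SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
           AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```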


Thanks,
Samir Magar

On Sun, Sep 11, 2016 at 6:04 AM, Venkata B Nagothi 
wrote:

>
> On Sat, Sep 10, 2016 at 8:49 PM, Samir Magar 
> wrote:
>
>> Hello,
>>
>>
>> My Application has normally 25 to 30 connections and it is doing lot of
>> insert/update/delete operation.
>> The database size is 100GB.
>> iowait  is at 40% to 45 % and CPU idle time is at 45% to 50%
>> TOTAL RAM = 8 GB   TOTAL CPU = 4
>>
>> postgresql.conf parametre:
>>
>> shared_buffers = 2GB
>> work_mem = 100MB
>> effective_cache_size = 2GB
>> maintenance_work_mem = 500MB
>> autovacuum = off
>> wal_buffers = 64MB
>>
>>
>> How can i reduce iowait and CPU idle time. It is slowing all the queries.
>> The queries that used to take 1 sec,it is taking 12-15 seconds.
>>
>
> That does not pinpoint the specific problem you are facing. Queries can
> slow down for a number of reasons, such as the following:
>
> - Lack of maintenance
> - Bloat in tables and indexes
> - Data size growth
> - If writes are slowing down, it could be because of slow disks
>
> Are you saying that queries are slowing down when there are heavy writes ?
> Are you referring to SELECTs or all types of queries ?
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>


Re: [PERFORM] How to reduce IOWAIT and CPU idle time?

2016-09-11 Thread Samir Magar
Hello Jeff,

Thanks for the reply!

Yes, you are right: the database has grown from 5 GB to 100 GB, and the
slowness may well be in the disk. However, we cannot replace the disk
right now.

Sure, I will try increasing the shared_buffers value toward 90% of RAM and
see how performance changes.


Thanks,
Samir Magar
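
For reference, on this 8 GB machine that change would translate into
something like the following postgresql.conf line (the value is
illustrative, and a server restart is needed for shared_buffers to take
effect):

```
shared_buffers = 6GB   # ~75-80% of 8 GB RAM, per the advice being tried;
                       # leave some headroom for the OS and per-backend memory
```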




On Sun, Sep 11, 2016 at 7:16 AM, Jeff Janes  wrote:

> On Sat, Sep 10, 2016 at 3:49 AM, Samir Magar 
> wrote:
>
>> Hello,
>>
>>
>> My Application has normally 25 to 30 connections and it is doing lot of
>> insert/update/delete operation.
>> The database size is 100GB.
>> iowait  is at 40% to 45 % and CPU idle time is at 45% to 50%
>> TOTAL RAM = 8 GB   TOTAL CPU = 4
>>
>> postgresql.conf parametre:
>>
>> shared_buffers = 2GB
>> work_mem = 100MB
>> effective_cache_size = 2GB
>> maintenance_work_mem = 500MB
>> autovacuum = off
>> wal_buffers = 64MB
>>
>>
>> How can i reduce iowait and CPU idle time. It is slowing all the queries.
>> The queries that used to take 1 sec,it is taking 12-15 seconds.
>>
>
> What changed between the 1 sec regime and the 12-15 second regime?  Just
> growth in the database size?
>
> Index-update-intensive databases will often undergo a collapse in
> performance once the portion of the indexes which are being rapidly dirtied
> exceeds shared_buffers + (some kernel specific factor related
> to dirty_background_bytes and kin)
>
> If you think this is the problem, you could try violating the conventional
> wisdom by setting shared_buffers 80% to 90% of available RAM, rather than
> 20% to 25%.
>
> Cheers,
>
> Jeff
>


[PERFORM] Delay in converting logs from ready state to done state

2016-10-11 Thread Samir Magar
Team,

we are seeing a delay in WAL files moving from the ready state to the done
state in the pg_xlog archive_status directory.

pg_xlog files are generated at around 2,000 per hour, but only about 1,894
get archived in that time, so archiving is too slow compared to the rate of
pg_xlog generation.

As a result, our pg_xlog directory keeps filling up. What could be the real
cause here?

We cannot see any specific error in pg_log except "no space left on device".



current setting:

wal_level = archive

archive_mode = on

max_wal_senders = 3

archive_command = 'gzip < %p > /pgarchive/%f'

checkpoint_segments = 3

checkpoint_timeout = 5min

log_checkpoints = on

archive_timeout = 60





Thanks,

Samir Magar
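
A quick way to watch the archiver backlog is to count the .ready files
under archive_status; a small sketch (the path shown is the 9.x layout and
may need adjusting for your install):

```shell
# Count WAL segments still waiting for archive_command to succeed.
# Usage: count_ready "$PGDATA/pg_xlog/archive_status"
count_ready() {
    ls "$1" 2>/dev/null | grep -c '\.ready$'
}
```

If the number keeps growing, archive_command cannot keep up with WAL
generation.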


Fwd: [PERFORM] Delay in converting logs from ready state to done state

2016-10-12 Thread Samir Magar
Hello Julien,

Thank you for your prompt response!
We have changed checkpoint_segments to 170 and are using pigz for the
pg_xlog compression.
It is working very well now.

Thanks again!!

Regards,
Samir Magar
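
In postgresql.conf terms, the two changes would look roughly like this
(illustrative; a robust archive_command should also refuse to overwrite an
existing target file):

```
checkpoint_segments = 170                       # was 3 - far too low for ~2000 WALs/hour
archive_command = 'pigz < %p > /pgarchive/%f'   # parallel gzip, same target path
```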

On Wed, Oct 12, 2016 at 11:33 AM, Julien Rouhaud 
wrote:

> On 12/10/2016 07:26, Samir Magar wrote:
> > Team,
> >
> > we are seeing delay in converting logs from ready state to done state in
> > pg_xlog archive status.
> >
> > we have seen pg_xlog generated 2000 per hour and it is getting archived
> > 1894. So the speed at which the archiving is done is too slow as compare
> > to the pg_xlog generation
> >
> > So our pg_xlog directory keeps filling regularly. What should be the
> > real cause here?
> >
> > We cannot see any specific error on pg_log  except no space left on
> device.
> >
> >
> >
> > current setting:
> >
> > wal_level = archive
> >
> > archive_mode = on
> >
> > max_wal_senders = 3
> >
> > archive_command = 'gzip < %p > /pgarchive/%f'
> >
>
> You could use pigz which is parallel, that could speed up compression.
>
> > checkpoint_segments = 3
> >
>
> this is way too low. If you generate 2000 WALs per hour, you should
> configure it to something like 170 (or a 5-min average if 2000 is a
> spike).  It'll perform fewer checkpoints and also generate fewer WALs.
>
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
>


Re: [PERFORM] pg_basebackup running slow

2016-10-14 Thread Samir Magar
What is the setting for max_wal_senders?
You can try increasing this parameter to improve backup performance.


Thanks,
Samir Magar

On Fri, Oct 14, 2016 at 6:05 PM, Swapnil Vaze  wrote:

> Hello,
>
>
>
> We are using postgresql 9.2 on redhat linux instance over openstack cloud.
>
>
>
> Database is around 441 GB.
>
>
>
> We are using below command to take backup:
>
>
>
> pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast
>
>
>
> Backup size created is around 84GB.
>
>
>
> However, it is taking almost 10 hr 21 minutes to complete.
>
>
>
> Looking for speed improvement?
>
>
>
> Thanks& Regards,
>
> Vaze Swapnil
>


[PERFORM] out of range error while restore using pgdump

2017-01-16 Thread Samir Magar
Hello,
We are migrating a database from one server to another.
We are not making any changes to the database structure.
Still, we are getting the error below while restoring with pg_restore.


pg_restore: [archiver (db)] COPY failed for table "tcb_test": ERROR:
 "5.40593839802118076e-315" is out of range for type double precision
CONTEXT:  COPY tcb_test, line 3932596, column va_odometro:
"5.40593839802118076e-315"

Could you please help us with how we can avoid or resolve this error?

postgres version: 9.5
OS: Red Hat 7.1

Thanks,
Samir Magar
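
For what it is worth, the failing literal is a subnormal double: it is
nonzero but smaller than the smallest normal double (about 2.2e-308), and
some PostgreSQL versions/platforms reject such values at input time. A
minimal reproduction sketch:

```sql
-- On an affected server this raises the same "out of range" error;
-- on servers that accept subnormal input it simply returns the tiny value.
SELECT '5.40593839802118076e-315'::double precision;
```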


[PERFORM] query performance issue

2017-11-15 Thread Samir Magar
Hello,
I am having performance issues with one of my queries.
The query takes 39 minutes to fetch 3.5 million records.

I want to reduce that time to 15 minutes.
Could you please suggest something to improve its performance?

server configuration:
 CPUs = 4
memory = 16 GB
shared_buffers = 3 GB
work_mem = 100MB
effective_cache_size = 12 GB

we are doing the vacuum/analyze regularly on the database.

attached is the query with its explain plan.

Thanks,
Samir Magar
query:

SELECT DISTINCT
    DLR_QLFY.DLR_QLFY_ID  AS DLR_QLFY_ID,
    NMQ_REQ.GRACE_PRD     AS GRACE_PRD,
    NMQ_REQ.HIDE_PRG_FLG  AS HIDE_PRG_FLG,
    NMQ_REQ.NTFY_DLR_FLG  AS NTFY_DLR_FLG,
    DLR_LOC.ACCT_NUM      AS ACCT_NUM,
    NMQ_REQ.NMQ_REQ_ID    AS NMQ_REQ_ID,
    NEW_MDL.PI_MDL_ID     AS PI_MDL_ID
FROM test.DLR_QLFY
INNER JOIN (
    SELECT DLR_GRP.DLR_GRP_ID AS LOC_GRP_ID, LEAD_DLR_LOC_ID,
           DLR_LOC.ACCT_NUM AS LOC_ACCT_NUM
    FROM test.DLR_GRP, test.DLR_GRP_DLR_XREF, test.DLR_LOC
    WHERE DLR_GRP.DLR_GRP_ID = DLR_GRP_DLR_XREF.DLR_GRP_ID
      AND DLR_GRP_DLR_XREF.DLR_LOC_ID = DLR_LOC.DLR_LOC_ID
      AND (DLR_GRP.DLR_GRP_TYP = 'LOC' OR DLR_GRP.DLR_GRP_TYP = 'COG')
      AND DLR_LOC.IS_ACTV = 'Y'
) LOC_GRP
    ON DLR_QLFY.QLFY_GRP_ID = LOC_GRP.LOC_GRP_ID
INNER JOIN (
    SELECT DLR_GRP.DLR_GRP_ID AS COG_GRP_ID, LEAD_DLR_LOC_ID,
           DLR_LOC.ACCT_NUM AS COG_ACCT_NUM
    FROM test.DLR_GRP, test.DLR_GRP_DLR_XREF, test.DLR_LOC
    WHERE DLR_GRP.DLR_GRP_ID = DLR_GRP_DLR_XREF.DLR_GRP_ID
      AND DLR_GRP_DLR_XREF.DLR_LOC_ID = DLR_LOC.DLR_LOC_ID
      AND DLR_GRP.DLR_GRP_TYP = 'COG'
      AND DLR_LOC.IS_ACTV = 'Y'
) COG_GRP
    ON DLR_QLFY.COG_GRP_ID = COG_GRP.COG_GRP_ID
INNER JOIN test.DLR_LOC
    ON DLR_LOC.ACCT_NUM = LOC_GRP.LOC_ACCT_NUM
   AND DLR_LOC.ACCT_NUM = COG_GRP.COG_ACCT_NUM
INNER JOIN test.DLR_LOC LEAD_LOC
    ON LEAD_LOC.DLR_LOC_ID = COG_GRP.LEAD_DLR_LOC_ID
   AND LEAD_LOC.ACCT_NUM = LEAD_LOC.COG_PARNT_ACCT
INNER JOIN test.DLR_LOC COG_LEAD
    ON COG_LEAD.DLR_LOC_ID = COG_GRP.LEAD_DLR_LOC_ID
INNER JOIN test.NMQ_REQ
    ON DLR_QLFY.NMQ_REQ_ID = NMQ_REQ.NMQ_REQ_ID
INNER JOIN test.NEW_MDL
    ON NMQ_REQ.NEW_MDL_ID = NEW_MDL.NEW_MDL_ID
INNER JOIN test.STG_ACFLX_NMQ_DLRS
    ON COG_LEAD.ACCT_NUM = STG_ACFLX_NMQ_DLRS.RLTNP_LEAD_ACCT
   AND STG_ACFLX_NMQ_DLRS.ACCT_ID = DLR_LOC.ACCT_NUM
WHERE DLR_LOC.IS_ACTV = 'Y'
  AND DLR_QLFY.QLF_FLG = 'N'
  AND NMQ_REQ.PGM_DSBL_FLG != 'Y'
  AND NMQ_REQ.PGM_START_DT <= CURRENT_DATE
  AND NMQ_REQ.PGM_END_DT > CURRENT_DATE
  AND DLR_QLFY.DLR_QLFY_ID NOT IN (
      SELECT DLR_QLFY.DLR_QLFY_ID
      FROM test.DLR_QLFY
      WHERE QLF_FLG = 'Y'
        AND DLR_QLFY.NMQ_REQ_ID = NMQ_REQ.NMQ_REQ_ID
  );
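
One thing worth trying: the correlated NOT IN at the end of the query can
usually be rewritten as NOT EXISTS, which the planner typically executes as
a more efficient anti-join. A sketch of the replacement predicate (assuming
the same semantics are intended):

```sql
-- Replaces: DLR_QLFY.DLR_QLFY_ID NOT IN (SELECT ... WHERE QLF_FLG='Y' ...)
AND NOT EXISTS (
    SELECT 1
    FROM test.DLR_QLFY dq2
    WHERE dq2.QLF_FLG = 'Y'
      AND dq2.NMQ_REQ_ID = NMQ_REQ.NMQ_REQ_ID
      AND dq2.DLR_QLFY_ID = DLR_QLFY.DLR_QLFY_ID
)
```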


---

access plan




"HashAggregate  (cost=4538.33..4538.34 rows=1 width=27)"
"  Group Key: dlr_qlfy.dlr_qlfy_id, nmq_req.grace_prd, nmq_req.hide_prg_flg, 
nmq_req.ntfy_dlr_flg, dlr_loc.acct_num, nmq_req.nmq_req_id, new_mdl.pi_mdl_id"
"  ->  Nested Loop  (cost=3.59..4538.31 rows=1 width=27)"
"->  Nested Loop  (cost=3.31..4537.94 rows=1 width=27)"
"  ->  Nested Loop  (cost=3.03..4530.16 rows=1 width=15)"
"Join Filter: (lead_loc.dlr_loc_id = 
dlr_grp_1.lead_dlr_loc_id)"
"->  Nested Loop  (cost=0.58..1438.27 rows=263 width=15)"
"  ->  Nested Loop  (cost=0.29..1306.78 rows=169 
width=15)"
"->  Seq Scan on dlr_loc lead_loc  
(cost=0.00..757.12 rows=169 width=4)"
"  Filter: (acct_num = cog_parnt_acct)"
"->  Index Only Scan using "IDX_101" on dlr_loc 
cog_lead  (cost=0.29..3.24 rows=1 width=11)"
"  Index Cond: (dlr_loc_id = 
lead_loc.dlr_loc_id)"
"  ->  Index Scan using idx_14 on stg_acflx_nmq_dlrs  
(cost=0.29..0.63 rows=15 width=14)"
"Index Cond: (rltnp_lead_acct = 
cog_lead.acct_num)"
"->  Nested Loop  (cost=2.45..11.74 rows=1 width=33)"
"  ->  Index Only Scan using idx3 on dlr_grp dlr_grp_1  
(cost=0.29..0.32 rows=1 width=8)"
"Index Cond: ((lead_dlr_loc_id = 
cog_lead.dlr_loc_id) AND (dlr_grp_typ = 'COG'::bpchar))"
"  ->  Nested Loop  (cost=2.17..11.41 rows=1 width=37)"
"Join Filter: (dlr_loc_2.acct_num = 
dlr_loc.acct_num)"
"->  Nested Loop  (cost=0.58..0.77 rows=1 
width=11)"
"  ->  Index Only Scan using idx6 on 
dlr_loc dlr_loc_2  (cost=0.29..0.32 rows=1 width=11)"
"Index Cond: ((acct_num = 
stg_acflx_nmq_dlrs.acct_id) AND (is_actv = 'Y

Re: [PERFORM] query performance issue

2017-11-15 Thread Samir Magar
Please find the EXPLAIN ANALYZE output below.

On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule 
wrote:

> Hi
>
> please send EXPLAIN ANALYZE output.
>
> Regards
>
> Pavel
>
> 2017-11-15 10:33 GMT+01:00 Samir Magar :
>
>> Hello,
>> I am having performance issues with one of the query.
>> The query is taking 39 min to fetch 3.5 mil records.
>>
>> I want to reduce that time to 15 mins.
>> could you please suggest something to its performance?
>>
>> server configuration:
>>  CPUs = 4
>> memory = 16 GM
>> shared_buffers = 3 GB
>> work_mem = 100MB
>> effective_cache_size = 12 GB
>>
>> we are doing the vacuum/analyze regularly on the database.
>>
>> attached is the query with its explain plan.
>>
>> Thanks,
>> Samir Magar
>>
>>
>> --
>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org
>> )
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-performance
>>
>>
>
"HashAggregate  (cost=4459.68..4459.69 rows=1 width=27) (actual 
time=2890035.403..2892173.601 rows=3489861 loops=1)"
"  Group Key: dlr_qlfy.dlr_qlfy_id, nmq_req.grace_prd, nmq_req.hide_prg_flg, 
nmq_req.ntfy_dlr_flg, dlr_loc.acct_num, nmq_req.nmq_req_id, new_mdl.pi_mdl_id"
"  ->  Nested Loop  (cost=3.59..4459.67 rows=1 width=27) (actual 
time=0.228..2864594.177 rows=12321289 loops=1)"
"->  Nested Loop  (cost=3.31..4459.29 rows=1 width=27) (actual 
time=0.221..2819927.249 rows=12321289 loops=1)"
"  ->  Nested Loop  (cost=3.03..4451.45 rows=1 width=15) (actual 
time=0.158..36816.304 rows=12612983 loops=1)"
"Join Filter: (lead_loc.dlr_loc_id = 
dlr_grp_1.lead_dlr_loc_id)"
"->  Nested Loop  (cost=0.58..1358.94 rows=263 width=15) 
(actual time=0.046..363.150 rows=52261 loops=1)"
"  ->  Nested Loop  (cost=0.29..1227.46 rows=169 
width=15) (actual time=0.024..86.909 rows=12151 loops=1)"
"->  Seq Scan on dlr_loc lead_loc  
(cost=0.00..757.80 rows=169 width=4) (actual time=0.010..31.028 rows=12151 
loops=1)"
"  Filter: (acct_num = cog_parnt_acct)"
"  Rows Removed by Filter: 21593"
"->  Index Only Scan using "IDX_101" on dlr_loc 
cog_lead  (cost=0.29..2.77 rows=1 width=11) (actual time=0.003..0.004 rows=1 
loops=12151)"
"  Index Cond: (dlr_loc_id = 
lead_loc.dlr_loc_id)"
"  Heap Fetches: 0"
"  ->  Index Scan using idx_14 on stg_acflx_nmq_dlrs  
(cost=0.29..0.63 rows=15 width=14) (actual time=0.008..0.019 rows=4 
loops=12151)"
"Index Cond: (rltnp_lead_acct = 
cog_lead.acct_num)"
"->  Nested Loop  (cost=2.45..11.75 rows=1 width=33) 
(actual time=0.058..0.615 rows=241 loops=52261)"
"  ->  Index Only Scan using idx3 on dlr_grp dlr_grp_1  
(cost=0.29..0.32 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=52261)"
"Index Cond: ((lead_dlr_loc_id = 
cog_lead.dlr_loc_id) AND (dlr_grp_typ = 'COG'::bpchar))"
"Heap Fetches: 0"
"  ->  Nested Loop  (cost=2.17..11.42 rows=1 width=37) 
(actual time=0.051..0.530 rows=236 loops=53436)"
"Join Filter: (dlr_loc_2.acct_num = 
dlr_loc.acct_num)"
"->  Nested Loop  (cost=0.58..0.77 rows=1 
width=11) (actual time=0.015..0.016 rows=1 loops=53436)"
"  ->  Index Only Scan using idx6 on 
dlr_loc dlr_loc_2  (cost=0.29..0.32 rows=1 width=11) (actual time=0.009..0.009 
rows=1 loops=53436)"
"Index Cond: ((acct_num = 
stg_acflx_nmq_dlrs.acct_id) AND (is_actv = 'Y'::bpchar))"
"Heap Fetches: 0"
"  ->  Index Only Scan using idx7 on 
dlr_grp_dlr_xref dlr_grp_dlr_xref_1  (cost=0.29..0.43 rows=1 width=8) (actual 
time=0.004..0.005 rows=1 loops=53402)"
"Index Cond: ((dlr_loc_id = 
dlr_loc_2.dlr_loc_id) AND (dlr_grp_id = dlr_grp_1.dlr_grp_id))"
"Heap Fetches: 0"
"->  Nested Loop  (c