Re: [PERFORM] Hot Standby performance issue

2013-11-02 Thread Tomas Vondra
On 28.10.2013 21:23, sparikh wrote:
> Hi,
> 
> Yes, you are right. The table is the biggest one. Please find below the
> information you requested. I agree that the idea that an autovacuum run on
> this table would fix the performance issue on the standby does not sound
> very convincing, but that is the only thing I could correlate with the
> moment the query on the standby started working again. Otherwise there are
> absolutely no changes at the code, database, or OS level.
> As of now the query is still working fine on the standby.
> 
> I may be wrong, but could it be the case that the standby disk was much
> more fragmented than the primary, and autovacuum on the primary fixed
> that? (Assuming autovacuum on the primary internally triggers the same on
> the standby.)

I find it very unlikely, but you didn't give us the necessary data (say, how
much free space was on the disks, etc.). The best way to pinpoint the issue
would be to run a profiler (which we have repeatedly asked you to do), but
now that the issue has disappeared we can only guess.

Please monitor the system, and if it happens again, run perf or another
profiler so that we know where the time is spent.
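
For example, a minimal perf session could look like the sketch below (the
backend PID is a placeholder; attach to the backend that is running the
slow query):

  # find the PID of the slow backend (pg_stat_activity, top, ...), then
  # sample it for ~30 seconds while the query is running slowly
  perf record -g -p <backend_pid> -- sleep 30
  perf report

  # or watch a live, system-wide profile while the slowness is happening
  perf top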

Tomas




Re: [PERFORM] Hot Standby performance issue

2013-11-02 Thread Tomas Vondra
On 28.10.2013 21:57, sparikh wrote:
> The table statistics I sent before were from the primary; the following are from the standby.
> 
> Index Tuples Fetched  25910277
> Tuples Inserted   0   
> Tuples Updated0   
> Tuples Deleted0   
> Tuples HOT Updated0   
> Live Tuples   0   
> Dead Tuples   0   
> Heap Blocks Read  138482386   
> Heap Blocks Hit   1059169445  
> Index Blocks Read 4730561 
> Index Blocks Hit  9702556 
> Toast Blocks Read 1165
> Toast Blocks Hit  82  
> Toast Index Blocks Read   85  
> Toast Index Blocks Hit3055
> Last Vacuum   
> Last Autovacuum   
> Last Analyze  
> Last Autoanalyze  
> Vacuum counter0   
> Autovacuum counter0   
> Analyze counter   0   
> Autoanalyze counter   0   
> Table Size46 GB   
> Toast Table Size  615 MB  
> Indexes Size  20 GB

Why have you skipped some of the rows you posted for the primary, e.g. the
sequential scans info?

Anyway, I don't think new data are going to help us: the issue resolved
itself somehow, so the current numbers are unlikely to show the original cause.

You can either wait to see whether it happens again, or dig through the logs
to see if / why autovacuum was not running on this table.
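
A quick way to check both, as a sketch (the table name and log file pattern
are placeholders, and the log check assumes log_autovacuum_min_duration is
set so that autovacuum runs get logged):

  # did autovacuum ever process the table? run this on the primary --
  # vacuum itself never runs on a hot standby
  psql -c "SELECT relname, last_autovacuum, autovacuum_count
           FROM pg_stat_user_tables
           WHERE relname = '<your_table>';"

  # look for autovacuum activity on that table in the server log
  grep -i "automatic vacuum of table" postgresql-*.log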

Tomas




[PERFORM] postgresql recommendation memory

2013-11-02 Thread Евгений Селявка
Please help with advice!

Server
HP ProLiant BL460c G1

Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:1
Core(s) per socket:4
CPU socket(s): 2
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 23
Stepping:  6
CPU MHz:   3000.105
BogoMIPS:  6000.04
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  6144K
NUMA node0 CPU(s): 0-7

32GB RAM
[root@db3 ~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32765 MB
node 0 free: 317 MB
node distances:
node   0
  0:  10


RAID1 2x146GB 10k rpm

CentOS release 6.3 (Final)
Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux


kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.swappiness = 30
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912


PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
20120305 (Red Hat 4.4.6-4), 64-bit

listen_addresses = '*'
port = 5433
max_connections = 350
shared_buffers = 8GB
temp_buffers = 64MB
max_prepared_transactions = 350
work_mem = 256MB
maintenance_work_mem = 1GB
max_stack_depth = 4MB
max_files_per_process = 5000
effective_io_concurrency = 2
wal_level = hot_standby
synchronous_commit = off
checkpoint_segments = 64
checkpoint_timeout = 15min
checkpoint_completion_target = 0.75
max_wal_senders = 4
wal_sender_delay = 100ms
wal_keep_segments = 128
random_page_cost = 3.0
effective_cache_size = 18GB
autovacuum = on
autovacuum_max_workers = 5
autovacuum_vacuum_threshold = 900
autovacuum_analyze_threshold = 350
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
log_min_duration_statement = 500
deadlock_timeout = 1s
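
A rough worst-case memory sketch for the settings above (pessimistic
assumption: every one of the max_connections backends allocates one full
work_mem at the same time):

  # shared_buffers + max_connections * work_mem, in MB
  echo $(( 8*1024 + 350*256 )) MB    # 97792 MB, roughly 95 GB against 32 GB of RAM

In practice a single query can even use several work_mem-sized allocations
(one per sort or hash node), so the theoretical ceiling is higher still.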


DB size is about 20 GB. There is no high write activity on the DB, but
periodically in the postgresql log I see, for example, a plain "select 1"
with a duration of about 500-1000 ms.

During these periods the response time from the DB is terrible. The slowdowns
are not tied to high traffic, there is no other app on the server, and there
is no specific cron job running on the server either.

Our app is written in Java and uses JDBC with its own internal pooling to
connect to the DB; there are about 100 connections to the DB. This is the
sar output:

12:00:01 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
09:30:01 PM     73.17    302.72 134790.16      0.00  46809.73      0.00      0.00      0.00      0.00
09:35:01 PM     63.42    655.80 131740.74      0.00  46182.74      0.00      0.00      0.00      0.00
09:40:01 PM     76.87    400.62 122375.34      0.00  42096.27      0.00      0.00      0.00      0.00
09:45:01 PM     58.49    198.33 121922.86      0.00  42765.27      0.00      0.00      0.00      0.00
09:50:01 PM     52.21    485.45 136775.65      0.15  49098.65      0.00      0.00      0.00      0.00
09:55:01 PM     49.68    476.75 130159.24      0.00  45192.54      0.00      0.00      0.00      0.00
10:00:01 PM     41.35    295.34 118655.80      0.00  40786.52      0.00      0.00      0.00      0.00
10:05:01 PM     60.84    593.85 129890.83      0.00  44170.92      0.00      0.00      0.00      0.00
10:10:01 PM     52.08    471.36 132773.63      0.00  46019.13      0.00      2.41      2.41    100.00
10:15:01 PM     73.93    196.50 129384.21      0.33  45255.76     65.92      1.19     66.87     99.64
10:20:02 PM     70.35    473.16 121940.38      0.11  44061.52     81.95     37.79    119.42     99.73
10:25:01 PM     57.84    471.69 130583.33      0.01  46093.33      0.00      0.00      0.00      0.00
10:30:01 PM     52.91    321.62 119264.34      0.01  41748.19      0.00      0.00      0.00      0.00
10:35:01 PM     47.13    451.78 114625.62      0.02  40600.98      0.00      0.00      0.00      0.00
10:40:01 PM     48.96    472.41 102352.79      0.00  35402.17      0.00      0.00      0.00      0.00
10:45:01 PM     70.07    321.33 121423.02      0.00  43052.04      0.00      0.00      0.00      0.00
10:50:01 PM     46.78    479.95 128938.09      0.02  37864.07    116.64     48.97    165.07     99.67
10:55:02 PM    104.84    453.55 109189.98      0.00  37583.50      0.00      0.00      0.00      0.00
11:00:01 PM     46.23    248.75 107313.26      0.00  37278.10      0.00      0.00      0.00      0.00
11:05:01 PM     44.28    446.41 115598.61      0.01  40070.61      0.00      0.00      0.00      0.00
11:10:01 PM     38.86    457.32 100240.71      0.00  34407.29      0.00      0.00      0.00      0.00
11:15:01 PM     48.23    275.60 104780.84      0.00  36183.84      0.00      0.00      0.00      0.00
11:20:01 PM     92.74    432.49 114698.74      0.01  40413.14      0.00      0.00      0.00      0.00
11:25:01 PM     42.76    428.50  87769.28      0.00  29379.87      0.00      0.00      0.00      0.00
11:30:01 PM     36.83    260.34  85072.46      0.00  28234.50      0.00      0.00      0.00      0.00
11:35:01 PM     62.52    481.56    931