We did see a small spike in disk I/O, but wait I/O totaled less than 10
seconds. The low-CPU-idle event goes on for several minutes, so neither the
wait I/O nor the heavier I/O correlates with the extended period.
System time does jump up at the same time as user time. System time of 15%
with user time at 60% (25% idle) is about the average for this test. We believe
that jump reflects time spent moving processes on and off the CPUs to
execute. No general system wait I/O is observed during this time.
Below are five-second samples of the Fibre Channel SAN volumes. Wait I/O is
listed in red; note that it lasts ten seconds or less.
                             extended device statistics
   r/s    w/s   Mr/s   Mw/s  wait  actv wsvc_t asvc_t  %w  %b device
   1.0  296.9    0.0  136.5   0.0  62.3    0.0  209.0   0 168 c3
   0.0   11.4    0.0    0.1   0.0   0.5    0.0   41.6   0  30 c3t60A98000572D4275684A563761586D71d0
   0.4   28.0    0.0    0.9   0.0   3.0    0.0  104.4   0  41 c3t60A98000572D4275684A5638364D644Ed0
   0.6  257.5    0.0  135.6   0.0  58.8    0.0  227.9   0  98 c3t60A98000572D4275684A56385468434Fd0
                             extended device statistics
   r/s    w/s   Mr/s   Mw/s  wait  actv wsvc_t asvc_t  %w  %b device
  13.8  721.2    0.1  133.9   0.0  75.0    0.0  102.0   0 200 c3
   0.0   88.8    0.0    6.0   0.0  19.0    0.0  213.7   0  65 c3t60A98000572D4275684A563761586D71d0
   2.4   86.6    0.0    1.2   0.0   1.6    0.0   18.0   0  39 c3t60A98000572D4275684A5638364D644Ed0
  11.4  545.8    0.1  126.7   0.0  54.4    0.0   97.7   0  97 c3t60A98000572D4275684A56385468434Fd0
                             extended device statistics
   r/s    w/s   Mr/s   Mw/s  wait  actv wsvc_t asvc_t  %w  %b device
   3.6  769.0    0.0  123.2  29.4 182.9   38.1  236.7   5 220 c3
   0.0  104.2    0.0    1.4   0.0  34.3    0.0  329.0   0  46 c3t60A98000572D4275684A563761586D71d0
   1.0   77.0    0.0   13.1   0.0   8.1    0.0  103.2   0  77 c3t60A98000572D4275684A5638364D644Ed0
   2.6  587.8    0.0  108.8  29.4 140.5   49.9  238.0  41  98 c3t60A98000572D4275684A56385468434Fd0
                             extended device statistics
   r/s    w/s   Mr/s   Mw/s  wait  actv wsvc_t asvc_t  %w  %b device
   9.4  761.2    0.1  133.1   3.3 122.6    4.3  159.1   1 196 c3
   0.0   33.8    0.0    0.3   0.0   2.1    0.0   63.5   0  30 c3t60A98000572D4275684A563761586D71d0
   7.4   94.8    0.1    1.8   0.0  16.2    0.0  158.6   0  66 c3t60A98000572D4275684A5638364D644Ed0
   2.0  632.6    0.0  131.0   3.3 104.3    5.2  164.3  10  99 c3t60A98000572D4275684A56385468434Fd0
                             extended device statistics
   r/s    w/s   Mr/s   Mw/s  wait  actv wsvc_t asvc_t  %w  %b device
   2.8  588.2    0.0  126.0   0.0 112.6    0.0  190.5   0 239 c3
   0.0   25.0    0.0    0.2   0.0   1.8    0.0   72.3   0  52 c3t60A98000572D4275684A563761586D71d0
   0.0  157.4    0.0   12.0   0.0  10.7    0.2   68.0   0  87 c3t60A98000572D4275684A5638364D644Ed0
   2.8  405.8    0.0  113.8   0.0 100.1    0.0  244.9   0 100 c3t60A98000572D4275684A56385468434Fd0
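For anyone reading these rows programmatically: each device line is the eleven
fields named in the "extended device statistics" header. A minimal parsing
sketch, assuming the fused columns split out as whitespace-separated values
(the sample row below is the busiest volume from the third five-second sample):

```python
# Split one Solaris "iostat -xn"-style device row into named fields.
# Field names are taken from the header in the samples above.
FIELDS = ["r/s", "w/s", "Mr/s", "Mw/s", "wait", "actv",
          "wsvc_t", "asvc_t", "%w", "%b", "device"]

def parse_iostat_row(line):
    """Return a dict mapping header fields to values for one device row."""
    parts = line.split()
    row = dict(zip(FIELDS, parts))
    # Everything except the trailing device name is numeric.
    for key in FIELDS[:-1]:
        row[key] = float(row[key])
    return row

# Row for the busiest volume in the third sample above:
row = parse_iostat_row(
    "2.6 587.8 0.0 108.8 29.4 140.5 49.9 238.0 41 98 "
    "c3t60A98000572D4275684A56385468434Fd0")
print(row["asvc_t"])  # average service time (ms) for active requests
print(row["%b"])      # percent of time the device was busy
```

This makes it easy to confirm the point above: wait is nonzero in only one of
the five samples, while %b stays pinned near 100 on the busiest LUN.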
Thanks!
Deb
From: Rajesh Kumar Mallah [mailto:mallah.raj...@gmail.com]
Sent: Thursday, July 01, 2010 2:50 AM
To: Deborah Fuentes
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Extremely high CPU usage when building tables
Hi,
1. Did you also check vmstat output? From the sar output the I/O wait is not
clear.
2. I guess you must be populating the database between creating the tables and
creating the indexes. Creating indexes requires sorting the data, which may be
CPU intensive, while loading/populating the data may saturate the I/O
bandwidth. I think you should check exactly when the max CPU utilisation
is taking place.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes
<dfuen...@eldocomp.com> wrote:
Hello,
When I run SQL to create new tables and indexes, Postgres consumes all of the
CPU and impacts other users on the server.
We are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16
threads) running Solaris 10.
I've attached the sar data at the time of the run- here's a snip-it below.
Any ideas would be greatly appreciated.
Thanks!
Deb
Here, note the run queue in the left column: that is the number of processes
waiting to run. 97 processes waiting to run at any one time with only eight
CPU cores looks very busy.
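To make "very busy" concrete, divide runq-sz by the core count: anything over
about 1 runnable thread waiting per core indicates a CPU backlog. A quick
sketch using the figures from this message (the 97 peak is from the attached
sar data; 32.8 is the peak visible in the snippet below):

```python
# Rough run-queue saturation check: runnable threads waiting per CPU core.
# A ratio well above 1 means processes are queuing for CPU time.
def runq_per_core(runq_sz, cores):
    """Average number of threads waiting to run for each core."""
    return runq_sz / cores

print(runq_per_core(97, 8))    # reported peak: roughly 12 waiting per core
print(runq_per_core(32.8, 8))  # peak in the snippet below: about 4 per core
```

Either way, the box is far past CPU saturation during the table/index build.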
r...@core2 # sar -q 5 500
SunOS core2 5.10 Generic_142900-11 sun4u    06/17/2010
12:01:50  runq-sz %runocc swpq-sz %swpocc
12:01:55      1.8      80     0.0       0
12:02:00      1.0      20     0.0       0
12:02:05      1.0      20     0.0       0
12:02:10      0.0       0     0.0       0
12:02:15      0.0       0     0.0       0
12:02:21      3.3      50     0.0       0
12:02:26      1.0      20     0.0       0
12:02:31      1.0      60     0.0       0
12:02:36      1.0      20     0.0       0
12:02:42     27.0      50     0.0       0
12:02:49     32.8      83     0.0       0
12:02:55