This seems related to issue http://jira.codehaus.org/browse/MRM-1457,
which was experienced in 1.3.3.
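
If it happens again while the CPU is pegged, a thread dump usually shows
where the time is going. A rough sketch of the commands I'd try (assuming
the JDK's jstack is on the PATH, and using the 27233 Tomcat PID from the
top output quoted below):

  # list the busiest threads inside the JVM
  top -H -p 27233

  # convert a hot thread's decimal ID to hex; it will match the
  # nid=0x... field in the thread dump
  printf '%x\n' <tid>

  # capture a thread dump of the running JVM
  jstack 27233 > archiva-threads.txt

If the hottest threads turn out to be the repository scanning ones, that
would line up with MRM-1457.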

On Wed, Mar 2, 2011 at 10:25 AM, Deng Ching <och...@apache.org> wrote:
> Was it scanning a repository when the CPU usage spiked up?
>
> IIRC, there were some performance improvements included in 1.3.3.
>
> Thanks,
> Deng
>
> On Wed, Mar 2, 2011 at 2:17 AM, Michael March <mma...@gmail.com> wrote:
>> In the past few months our 1.2.1 server (running on CentOS 5.5 / JDK
>> 6) would, after an hour or two of 'normal' operation, spike the CPU
>> load to 10+.
>> Recently we upgraded to 1.3.4 and our server is running a little
>> smoother, but after a few hours the load goes back up to 10+ again.
>> Here's a recent output of "top":
>>
>> top - 09:08:43 up 1 day, 15:28,  1 user,  load average: 13.52, 12.05, 11.56
>> Tasks: 105 total,  10 running,  95 sleeping,   0 stopped,   0 zombie
>> Cpu(s): 56.3%us,  6.3%sy,  0.1%ni, 34.5%id,  0.4%wa,  0.0%hi,  2.3%si,  0.2%st
>> Mem:   2097152k total,  2035216k used,    61936k free,    51448k buffers
>> Swap:   524280k total,      136k used,   524144k free,   766424k cached
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 27233 tomcat    18   0 1241m 906m 9716 S 61.5 44.3 953:33.33 java
>> 23841 postgres  15   0  152m  16m  13m R 31.5  0.8  37:44.30 postmaster
>> 27269 postgres  15   0  153m  31m  28m S 16.5  1.6 425:21.18 postmaster
>> 23783 postgres  15   0  152m  25m  23m S 13.5  1.3  38:24.97 postmaster
>> 27268 postgres  15   0  152m  35m  32m S 12.0  1.7 189:07.95 postmaster
>> 18653 postgres  16   0  152m  29m  26m R  4.5  1.4 120:31.79 postmaster
>> 27668 postgres  16   0  152m  28m  26m R  4.5  1.4 199:01.50 postmaster
>>  9560 root      35  19  260m  23m 6844 R  3.0  1.1   0:00.92 yum-updatesd-he
>> 23274 postgres  15   0  152m  25m  23m S  1.5  1.3  38:29.17 postmaster
>>     1 root      15   0 10348  692  580 S  0.0  0.0   0:00.37 init
>>     2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
>>     3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
>>     4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
>>     5 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/0
>>     6 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 khelper
>>     7 root      12  -5     0    0    0 S  0.0  0.0   0:00.00 kthread
>>     9 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 xenwatch
>>
>>
>> ... this is after a few hours of operation, and once it hits this high
>> load it never comes down.
>>
>> Is there anything I should be looking at to fix this?
>>
>>
>> --
>> <cowmix>
>>
>
