Long forgotten thread...
Turns out it was the ohci-hcd USB driver blasting out an insane number of
interrupts that was driving the load average up.
# grep ohci /proc/interrupts
169: 294912182 612411557 812332153 58723016 IO-APIC-level ohci_hcd, ohci_hcd
The temporary fix obviously was 'rmmod
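To measure how hard an IRQ line like this is firing, a small sketch along these lines can help (assumes a Linux /proc/interrupts; the IRQ number 169 is taken from the grep output above):

```shell
#!/bin/sh
# Sketch: sample /proc/interrupts twice, one second apart, and diff the
# counters to get an interrupts-per-second rate for one IRQ line.

# Sum the per-CPU counters on the line for a given IRQ number.
irq_total() {
    awk -v irq="$1:" '$1 == irq {
        s = 0
        # Counter columns are numeric; the trailing columns are text
        # (chip name, driver names), so stop at the first non-number.
        for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i
        print s
    }' "$2"
}

a=$(irq_total 169 /proc/interrupts); a=${a:-0}
sleep 1
b=$(irq_total 169 /proc/interrupts); b=${b:-0}
echo "IRQ 169: $((b - a)) interrupts/sec"
```

A rate in the tens of thousands per second on an otherwise idle box points at a runaway device or driver, as in the ohci_hcd case above.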
There could be disk and/or RAID problems affecting disk I/O, which could
lead to higher than normal load averages.
Henry
Michael Green wrote:
I have 18
Try to look for processes which are in the zombie (defunct) state.
If I'm not mistaken, for some reason they tend to be counted when the
kernel calculates the load average.
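One quick way to test that theory is to look for 'Z' in the process state column, for instance with a sketch like this (standard ps and awk assumed; it prints nothing when there are no zombies):

```shell
#!/bin/sh
# Sketch: list zombie (defunct) processes, which show up with a state
# code starting with 'Z' in ps output. The awk filter skips the header
# line, so a clean system prints nothing.
ps -eo pid,ppid,stat,comm | awk 'NR > 1 && $3 ~ /^Z/ { print }'
```

The PPID column is the interesting part: zombies stay around only because their parent has not reaped them, so the fix is usually in the parent process.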
Michael Green wrote:
I have 18 identical Sun Fire X4100 systems here all configured
identically:
4-way Opteron, 4G RAM, 70G SAS
Oren Held wrote:
Try to look for processes which are in the zombie (defunct) state.
If I'm not mistaken, for some reason they tend to be counted when the
kernel calculates the load average.
No, zombie processes would appear in the listing Michael gave,
and it's empty. Besides, they are not
On 10/07/06, Michael Green [EMAIL PROTECTED] wrote:
time the symptom returns. Typically the load average reaches 3 and
won't go beyond that. How would you approach such a problem?
Have you checked the system logs?
--Amos
So I had to reboot my (330 day uptime...) machine. I couldn't even
cleanly shut it down, because the NFS unmount never finished.
RH Magazine suggests some trick with rpciod, but I haven't tried it yet.
http://www.redhat.com/magazine/005mar05/departments/tips_tricks/
On Wed, Mar 30, 2005, Nadav Har'El wrote about Re: high load ?:
Unfortunately, crappy NFS implementations and the like are notorious for
leaving processes in the D state for very long times. This has much worse
It's funny - I was just bitten by this problem today :(
Today, my work machine stopped
On Tue, Apr 05, 2005, Tzafrir Cohen wrote about Re: high load ?:
So I had to reboot my (330 day uptime...) machine. I couldn't even cleanly
shut it down, because the NFS unmount never finished.
umount -f? umount -l?
umount -f didn't help (it simply hung). I didn't think of trying umount
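For the record, a sketch of the fallback sequence discussed here (the mount point is a placeholder; a lazy unmount detaches the mount point immediately and finishes the cleanup once the filesystem stops being busy):

```shell
#!/bin/sh
# Sketch: try a forced unmount first; if the dead NFS server makes even
# that hang or fail, fall back to a lazy unmount (umount -l), which
# detaches the mount point right away.
force_umount() {
    # $1: mount point to detach (placeholder path in the usage below)
    umount -f "$1" 2>/dev/null || umount -l "$1"
}
```

Usage would be something like `force_umount /mnt/nfs`; note that -l only detaches the name, so processes already blocked in D state on the dead mount stay blocked.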
On Fri, Apr 01, 2005 at 03:35:36AM +0300, guy keren wrote:
to hold a semaphore, or because we're already holding some other
semaphore?
good point, sorry for not being clear. To sleep on a semaphore while
waiting to acquire it. Specifically - see
arch/i386/kernel/semaphore.c, __down(), which
On Wed, 30 Mar 2005, shimi wrote:
Generally, updating a big table all the time is a BAD IDEA, and should
_never_ be done. The main question is: is it required that the table
be up-to-date according to the last INSERT/UPDATE you just did, or that
you just want it to be updated some when, as long
On Wed, 30 Mar 2005, Muli Ben-Yehuda wrote:
On Wed, Mar 30, 2005 at 05:19:23PM +0200, guy keren wrote:
since when does the 'D' state mean a process is holding a lock? there are
many situations in the kernel where processes are put to sleep while not
holding any lock (at least as far as i saw
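To see which processes are actually sitting in 'D' state, and roughly where in the kernel each one is sleeping, something like this sketch works (standard ps and awk assumed; the wchan column shows the kernel function the process is blocked in):

```shell
#!/bin/sh
# Sketch: list processes in uninterruptible sleep ('D' state), the usual
# suspects behind a high load average on an otherwise idle CPU. The awk
# filter skips the ps header line.
ps -eo pid,stat,wchan:32,comm | awk 'NR > 1 && $2 ~ /^D/ { print }'
```

A wchan full of NFS or block-layer function names is a strong hint that it is I/O (or a dead server), not CPU load, inflating the load average.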
On Wed, Mar 30, 2005 at 09:25:29AM +0200, Shachar Shemesh wrote:
Processes should never spend too much time in the D state. The very
fact that certain activities mean you are almost guaranteed to see
processes in the D state means there are bugs in the kernel.
Why do you think so? D means
On Wed, Mar 30, 2005 at 09:25:29AM +0200, Shachar Shemesh wrote:
guy keren wrote:
[snip]
Processes should never spend too much time in the D state. The very
fact that certain activities mean you are almost guaranteed to see
processes in the D state means there are bugs in the kernel.
Muli Ben-Yehuda wrote:
On Wed, Mar 30, 2005 at 09:25:29AM +0200, Shachar Shemesh wrote:
Processes should never spend too much time in the D state. The very
fact that certain activities mean you are almost guaranteed to see
processes in the D state means there are bugs in the kernel.
Why
Yedidyah Bar-David wrote:
You may have to tweak the numbers a bit, but it seems about right. A
different question is whether, under this scenario, the load average is
still the right metric to look at? I think it is. If the load average is
2, my shell still has quite a queue to wait in before being
First of all, thanks for the responses.
Here are more details to chew on.
We found the problem and solved it, but I would be glad to see how others
would attack the problem with this extra information:
Basically, on every hit the database writes a row into a table in MySQL.
The server gets about 5
On Wed, Mar 30, 2005 at 10:36:07AM +0200, Shachar Shemesh wrote:
Yedidyah Bar-David wrote:
You may have to tweak the numbers a bit, but it seems about right. A
different question is whether, under this scenario, the load average is
still the right metric to look at? I think it is. If the
Gabor Szabo wrote:
First of all, thanks for the responses.
Here are more details to chew on.
We found the problem and solved it, but I would be glad to see how others
would attack the problem with this extra information:
Basically, on every hit the database writes a row into a table in MySQL.
The
On Wednesday 30 March 2005 10:18, Muli Ben-Yehuda wrote:
Why do you think so? D means that a process is holding a lock. Do you
mean bugs in the sense of long lock holding times?
Even worse, I have seen too many occasions when "long" was actually
unbounded. When kernel code does uninterruptible
On Wed, Mar 30, 2005, Yedidyah Bar-David wrote about Re: high load ?:
This is so only if you have load only on the CPU. If, for example, you
only have one process running, but which does a lot of paging, your
load average will be =1, but the responsiveness will be quite bad,
as your shell
On Wed, 30 Mar 2005, Muli Ben-Yehuda wrote:
On Wed, Mar 30, 2005 at 09:25:29AM +0200, Shachar Shemesh wrote:
Processes should never spend too much time in the D state. The very
fact that certain activities mean you are almost guaranteed to see
processes in the D state means there are
--=-WHb7yYBfPjIjPgE5Ple/
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
On Wed, 2005-03-30 at 10:42 +0200, Gabor Szabo wrote:
First of all, thanks for the responses.
Here are more details to chew on.
We found the problem and solved it, but I would be glad to see how others
would
On Mon, 28 Mar 2005, Oron Peled wrote:
The load average is not a percentage. The load average numbers are the
average number of processes waiting for or using the CPU over the last 1, 5
and 15 minutes [remark: on Linux, processes in the 'D' (uninterruptible
sleep) state are weirdly included in this count too].
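That definition can be checked directly against /proc/loadavg, whose first three fields are exactly those 1-, 5- and 15-minute averages (Linux assumed):

```shell
#!/bin/sh
# Sketch: read the three load averages straight from /proc/loadavg.
# The remaining fields (running/total tasks, last PID) land in "rest".
read one five fifteen rest < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen"
```

Comparing these numbers against CPU utilization from top is what exposes the D-state effect: the averages can be high while the CPU columns show the machine nearly idle.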
guy keren wrote:
On Mon, 28 Mar 2005, Oron Peled wrote:
The load average is not a percentage. The load average numbers are the
average number of processes waiting for or using the CPU over the last 1, 5
and 15 minutes [remark: on Linux, processes in the 'D' (uninterruptible
sleep) state are weirdly in this
On Monday 28 March 2005 11:03, Gabor Szabo wrote:
One of the things which is not clear to me is that it seems there is a
total lack of connection between the average load and the CPU states.
Could someone explain why that would be ?
The load average is not a percentage. The load average
On Mon, Oct 20, 2003 at 08:58:19PM +0200, WildLove - Elad Almadoi wrote:
Hey!
[snip]
Here's my 'top' output:
8:56pm up 3 days, 23:41, 1 user, load average: 27.03, 19.53, 15.26
426 processes: 395 sleeping, 3 running, 28 zombie, 0 stopped
CPU0 states: 8.8% user, 11.2% system, 0.0% nice,
Hi,
AFAIK:
The load average is the number of tasks in your CPU's run queue. Since
you have more than 1 CPU, the system tries to break each job it has into as
many tasks as it can, so each will be processed on a different CPU.
Meaning: you should have a higher load average. I think the results are
: Monday, October 20, 2003 10:15 PM
Subject: Re: High load average on IBM xSeries 335
Hi,
AFAIK:
The load average is the number of tasks in your CPU's run queue. Since
you have more than 1 CPU, the system tries to break each job it has into as
many tasks as it can, so each will be processed
Hi
What kernel are you using? Is it the latest from Red Hat?
Have you tried disabling HT in the BIOS?
I know some kernels have problems with it.
Later then
Erez
- Original Message -
From: WildLove - Elad Almadoi [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
On Mon, Oct 20, 2003 at 08:58:19PM +0200, WildLove - Elad Almadoi wrote:
Hey!
I'm running an IBM xSeries 335 (aka IBM x335) with:
IBM x335 rackmount (xSeries)
CPU: Dual Intel Xeon 2.6GHz (533MHz)
Hard drives: IBM 36.4GB 10K-rpm Ultra160 SCSI Hot-Swap x 2 (1 is connected as
a mirror (raid-1) for
1. The load average covers the last 1, 5 and 15 minutes.
2. You're right, the memory is almost all used. I think a restart of the
services (Apache+MySQL) might do some good at freeing some memory.
3. Swapping can also explain busy I/O and slow responses to disk
commands (like ls and friends). I think this
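A quick sketch for checking the swapping theory from point 3, reading /proc/meminfo directly (Linux assumed; the field names are the standard meminfo ones):

```shell
#!/bin/sh
# Sketch: print free memory and swap usage in MB from /proc/meminfo.
# A low MemFree combined with SwapFree well below SwapTotal supports
# the theory that the box is paging and stalling disk commands.
awk '/^(MemFree|SwapTotal|SwapFree):/ { printf "%s %d MB\n", $1, $2 / 1024 }' /proc/meminfo
```

For an ongoing view, vmstat's si/so columns show the actual page-in/page-out rates rather than the static totals.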
On Tue, Oct 21, 2003 at 12:14:51AM +0200, Lior Kaplan wrote:
1. The load average covers the last 1, 5 and 15 minutes.
2. You're right, the memory is almost all used. I think a restart of the
services (Apache+MySQL) might do some good at freeing some memory.
Unless this is some runaway process which won't be
, when there's almost no usage of Apache and
MySQL, it also drops down to 0.*
Is there any kind of config in Apache that may cause it?
- Original Message -
From: Tzafrir Cohen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 21, 2003 4:33 AM
Subject: Re: High load