Matthew Harrell <[EMAIL PROTECTED]> wrote:
>    procs                      memory      swap          io     system         cpu
>  r  b  w   swpd   free   buff   cache  si  so    bi    bo   in    cs  us  sy  id
> 10  0  0   1308   3888   5856   54780   0   0    17   150  427  2163  20  76   4
>
>    procs                      memory      swap          io     system         cpu
>  r  b  w   swpd   free   buff   cache  si  so    bi    bo   in    cs  us  sy  id
>  3  1  0    588   4788   6768  174512   0   0    56   231  602   819  20  37  43
The first system seems to be spending a lot of CPU in the kernel (the
sy column) and is nearly CPU-bound. This is probably due to the high
rate of context switches (the cs column).
The second system spends half as much time in the kernel (37% vs 76%
sy) and has CPU to spare (the id column). The likely cause is its
context switch rate, which is less than half that of the first system
(819 vs 2163, in this case).
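For reference, the cs and sy figures above can be pulled straight out
of a vmstat data line with awk (field positions taken from the header
shown above; a minimal sketch using the first system's line):

```shell
# Extract context switches (cs, field 13) and system CPU (sy, field 15)
# from the first system's vmstat output line.
echo "10 0 0 1308 3888 5856 54780 0 0 17 150 427 2163 20 76 4" |
  awk '{ printf "cs=%s sy=%s\n", $13, $15 }'
# prints: cs=2163 sy=76
```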
If these are similar systems doing similar workloads, there's
something "wrong" with the first system. The difference between the
vmstat output formats implies that they're running different OS revs,
which could be enough to explain the variance.
Neither is swapping significantly, so memory doesn't seem to be the
bottleneck.
The next step is to run iostat or equivalent during a peak period.
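Something along these lines (a sketch; the -x flag is the Linux
sysstat form, and other platforms' iostat takes different options):

```shell
# One-shot extended disk report (averages since boot); during the peak
# period, add an interval, e.g. "iostat -x 5", to sample every 5 seconds
# and watch the per-device utilization and wait-time columns.
if command -v iostat >/dev/null 2>&1; then
  iostat -x
else
  echo "iostat not found (on Linux it ships in the sysstat package)"
fi
```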
-Dave