I used to fight this one all the time too: I was convinced that the harm
induced by paging-related VM I/O waits was killing machine performance.
It was, before I discovered VDISK units for paging. I was just about
always in Q3 waiting for I/O response from one of my (then-)real paging
minidisks, even under conditions of very light memory & load stress.
I changed my paging units to VDISK, and now whenever I find myself in Q3,
it's for a better reason (waiting on TCP/IP, or even real disk I/O).
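In case it helps, here's roughly what the setup looks like. Every number
and device name below is made up for illustration (and the Linux device
name depends on your discovery order), so check the details with your VM
folks before copying any of it:

```
* In the guest's CP directory entry: a 256 MB VDISK (FB-512 blocks,
* 524288 x 512 bytes) at virtual address 0111 -- sizes/addresses invented:
MDISK 0111 FB-512 V-DISK 524288 MR

# On the Linux side. A VDISK's contents vanish at LOGOFF, so mkswap
# has to run at every IPL -- an init script is the usual spot:
chccwdev -e 0.0.0111     # bring the device online (s390-tools)
mkswap /dev/dasdb1       # device name depends on discovery order
swapon /dev/dasdb1
```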

Someone posted a link to a PDF document produced by Red Hat with sysctl
tweaks for RHEL 3, which (despite not being even 50% relevant for me)
I'll admit was a very valuable discussion that shed some more light
on Linux paging & file caching for me, at least. But under VM, it doesn't
matter as much to me as it would if I were running LPAR. Here, I'd trust
the counsel of my VM guys before I'd start tweaking sysctls. Under LPAR,
I'd have no choice other than to tweak or suffer.
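For the curious, the knobs in that class of document look like the
fragment below. I'm showing a 2.6-kernel name here (the RHEL 3 / 2.4
equivalents, vm.bdflush and friends, are spelled differently), and the
value is a placeholder, not a recommendation:

```
# /etc/sysctl.conf -- illustrative value only
# Lower values make the kernel prefer dropping file cache
# over swapping out program pages; load with "sysctl -p".
vm.swappiness = 10
```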

If you don't trust Linux to make page/fs cache decisions, try letting
VM do it through VDISK. I'd try this before I start risking the stability
of my system with radical sysctl tweaks. When I did it, I went all the
way: I now have 1.1 GB of page space on VDISK and zero on 3390-x, which
backs 768 MB main memory. Current average load pressure says I'd be doing
very little paging at about 1 GB main; but every now and again I get a
runaway process that changes the phrase "memory stress" to "paging
frenzy", and the machine still recovers from frenzy-induced runqueue
loads as high as 34.0 as long as the memory pig is found and slain before
too much time goes by. My load (interactive developers) sounds very
different from yours; but if paging-related waits are your problem, try
VDISK. It might help you out, if your VM guys will let you use it.
I love VM :-)
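If you want to watch for that kind of frenzy yourself, the runqueue
figure I quoted comes straight out of /proc/loadavg; per proc(5), field
four there is "running/total" processes, so something like this prints
the 1-minute load and current runqueue length:

```shell
# 1-minute load average and runqueue length from /proc/loadavg;
# field 4 is "running/total" processes, per proc(5)
awk '{ split($4, rq, "/"); printf "load1=%s running=%s\n", $1, rq[1] }' /proc/loadavg
```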

Peace,
--Jim--
James S. Tison
Senior Software Engineer
TPF Laboratory / Architecture
IBM Corporation
"If dogs don't go to heaven, then, when I die, I want to go where they do."
    -- Will Rogers

Linux on 390 Port <[EMAIL PROTECTED]> wrote on 04/26/2004 09:30:31:
<snip>
> Current environment has 2 GB central, 0 expanded for VM (again, because
> the decision was made after our last maint window or we would have moved
> some memory to expanded). The VM guest in question actually did NOT have
> 1.3 GB real, as I was told, but in fact 768 MB. The VM guy (who has
> about 2 weeks worth of experience with VM now) had the 1.3 GB on a diff
> guest.
> The Linux Guest being used for WebSphere also has a 843 MB physical disk
> swap volume. It is the Linux guest that is writing pages to the swap
> volume, not VM.  Our VM is not paging hardly at all, except a smidgin at
> guest IPL. Then it gives it back.
<snip>
> Anyway, I'd like to see the amount of memory being used for buffers
> reduced, since this system doesn't have much except the Java code and
> the WebSphere configuration/deployment stuff being accessed from local
> disk. All of the data for the applications is sitting on DB2 being
> accessed by the DB2 Connect Linux Client.
>
<snip>

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
