On Tue, 30 Oct 2007 06:19:33 am Marcy Cortes wrote:
> I'm not sure it is working as designed.

I never said it was a good design -- and perhaps I should have read your
earlier messages prior to saying that. :)  It does depend on your point of
view though -- it's another of those aspects that betrays Linux's
single-system, non-resource-sharing heritage.  In a non-shared environment,
keeping swap pages hanging around on disk is a good design point, in that it
can realistically save costly I/O.  It's not so good for us though.  :)

> Eventually, when we use up our
> swap, WAS crashes OOM (that's *our* real issue, at least our biggest one
> anyway :).

Yes... and that's not going to be solved by CMM or creating different swap
VDISKs or anything like that.  The earlier hints about JVM heap size and
garbage collection and so on will be useful here.  I guess the application is
being checked for leaks as well -- or do your developers write perfect code
first-time-every-time too? ;-P
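
To make that concrete: the usual first moves are to pin the JVM heap so it
can't quietly grow past what you've planned for, and to turn on GC logging
so you can see whether the heap is genuinely filling up.  A sketch only --
the sizes are made-up numbers, and WAS sets these through its admin console
rather than on a command line, but the flags are the same idea:

  # Fixed heap ceiling plus GC logging.  512m/1024m are examples only;
  # size them to the application, not to these numbers.
  java -Xms512m -Xmx1024m -verbose:gc -jar yourapp.jar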

> But if we are able to swapoff/swapon and recover that space
> without crashing WAS that kinda says to me that it didn't need it
> anyway - course I haven't tried that whilst workload was running
> through...  Maybe it is destructive.

It might be, but as long as your Linux has more free virtual memory (free
RAM plus free space on the remaining swap devices) than there are pages in
use on the device you want to remove, you *should* be able to do a swapoff
without impact (things might get a little sluggish for a few seconds while
kswapd shuffles things around though).  It would be nice to be able to tell
accurately just how much swap space is being used on a device --
/proc/meminfo is system-wide, although /proc/swaps does give a per-device
"Used" figure (bear in mind it includes swap-cached pages, so it overstates
what would actually have to be read back in).  SwapCached in /proc/meminfo
is a helpful indicator that counts the swap space "hanging around" -- pages
with a copy on swap as well as in memory (you could try
http://www.linuxweblog.com/meminfo among heaps of other places for more
info about what the numbers from meminfo mean); if this number is low
compared to the swap space you have in use, you're not likely to get much
benefit from swapoff/swapon cycles.
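
If you want to eyeball the headroom before pulling the trigger, something
along these lines would do.  A sketch only -- /dev/dasdb1 is an example
name, so substitute one of your VDISK swap devices:

  #!/bin/sh
  # Will "swapoff $DEV" have somewhere to put the pages it evicts?
  # /proc/swaps columns: Filename Type Size Used Priority (kB values).
  DEV=/dev/dasdb1                                  # example device name
  size_kb=$(awk -v d="$DEV" '$1 == d {print $3}' /proc/swaps)
  used_kb=$(awk -v d="$DEV" '$1 == d {print $4}' /proc/swaps)
  memfree_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
  swapfree_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
  # Destination space is free RAM plus free swap on the devices that
  # remain, so subtract this device's own free portion from SwapFree.
  dest_kb=$((memfree_kb + swapfree_kb - (size_kb - used_kb)))
  if [ "$dest_kb" -gt "$used_kb" ]; then
      echo "swapoff $DEV looks safe: $used_kb kB in use elsewhere to cover"
  else
      echo "not enough headroom to drop $DEV cleanly"
  fi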

> We plan to experiment some with the vm.swappiness and see if that helps.
> I guess in the very least, we can add enough vdisks and enough VM paging
> packs to get through the week without a recycle until we figure this out
> as long as response time & cpu savings remain this good with 6.1.

Good plan, although vm.swappiness is only likely to delay your swap usage
rather than eliminate it entirely (if something is asking for that much
memory, at some point it's going to have to get it from somewhere).  Of
course, if it delays heavy swapping long enough to get you through the week,
then that's a win.
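
For reference, it's a runtime tunable so you can experiment without a
reboot -- values run 0-100, lower makes the kernel less eager to swap, and
60 is the usual default (the 10 below is just a starting point, not a
recommendation):

  # Try a lower value on the fly...
  sysctl -w vm.swappiness=10
  # ...or equivalently:
  echo 10 > /proc/sys/vm/swappiness
  # To keep it across reboots, add "vm.swappiness = 10" to /etc/sysctl.conf.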

While you've got this WAS issue you are *possibly* justified in throwing a
DASD swap device at the end of your line of VDISKs (I emphasise possibly
because I don't want to offend Rob et al too much).  Probably the last
thing you want is to just keep adding VDISKs and VM page packs until your
VM paging system is consumed by leaked Linux memory.  You could do a
nightly swapoff/swapon of some of the VDISKs to flush things out and reduce
the activity going to the DASD swap.  I guess what I'm saying is that you
could treat this WAS problem as an aberration rather than the normal
operating mode for your system -- don't jeopardise your entire environment
for the sake of one problem system, and be prepared to let best-practice
slide a bit while you get the issue sorted.  Of course you're in a much
better position than me to decide if your paging environment needs such
protection.
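
On that nightly flush idea, the cron job could be as dull as the sketch
below.  The device names and priority are examples, and it leans on
swapoff failing (with a non-zero exit) when there isn't enough free memory
to absorb a device's pages:

  #!/bin/sh
  # Cycle each VDISK swap device in turn so the swap-cached pages get
  # flushed out; one at a time leaves the other devices (including the
  # DASD swap) free to absorb anything still genuinely in use.
  for dev in /dev/dasdb1 /dev/dasdc1; do         # example device names
      if swapoff "$dev"; then
          swapon -p 10 "$dev"    # -p 10 is an example; reuse your scheme
      else
          echo "skipping $dev: swapoff refused (not enough free memory?)"
      fi
  done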

I also transposed my client's problem onto your shop -- I thought you were
concerned about the number of pages allocated to VDISKs.  That's why I
mentioned the stuff about DELETE/DEFINE of your VDISK swaps.

Best of luck with the issue!

Cheerio,
Vic Cross
