It's a bit of a balancing act. Running zLinux lean and mean also opens up the 
possibility of extensive swapping under unusual conditions; with WebSphere and 
other big Java applications, it's almost a given. That's not a big issue when 
it's only a couple of servers, but in large 24x7 server pools, the VDISK memory 
consumption could end up being a pretty big chunk. Some smart automation (with 
a format step?) run on a scheduled basis could keep things under control. If 
swap space is really big, then doing that operation on individual disks over 
timed intervals could minimize the impact on the rest of the LPAR (see the 
sketch below). 
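
A minimal sketch of that automation, in Python, following the 
swapoff/detach/define/mkswap/swapon sequence Marcy describes below. The 
device numbers, block count, device node path, and pause are all placeholders 
for your own setup, and it has to run as root on the guest:

#!/usr/bin/env python3
# Recycle VDISK swap devices one at a time so CP can reclaim the backing
# pages. Everything configurable here is an assumption to adapt.
import subprocess
import time

VDISKS = ["0111", "0112"]   # assumption: the guest's VDISK swap device numbers
BLOCKS = "4194304"          # assumption: a 2G VDISK in 512-byte blocks
PAUSE = 30 * 60             # seconds between disks ("timed intervals")

def run(cmd):
    # Fail fast: a half-recycled swap disk should stop the script.
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def recycle(devno):
    node = f"/dev/disk/by-path/ccw-0.0.{devno}"  # assumption: by-path udev node
    run(f"swapoff {node}")                 # drain pages back to RAM -- the slow part
    run(f"chccwdev -d 0.0.{devno}")        # vary the device offline in Linux
    run(f"vmcp detach {devno}")            # CP frees the backing pages
    run(f"vmcp define vfb-512 as {devno} blk {BLOCKS}")
    run(f"chccwdev -e 0.0.{devno}")        # bring the fresh VDISK online
    run(f"mkswap {node}")                  # rebuild the swap signature
    run(f"swapon {node}")                  # back in service, empty again

if __name__ == "__main__":
    for i, devno in enumerate(VDISKS):
        if i:
            time.sleep(PAUSE)              # spread the page-rate hit per disk
        recycle(devno)

The swapoff is the expensive step (as Marcy notes, half an hour or more on a 
full 2G disk), so the pause between disks should be generous.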

As Marcy says, maintenance forces a reboot eventually, but I can envision 
situations where rebooting (the immediate solution) just to reset swap space 
would cause eyes to roll among certain groups. And really, if we have to trim 
server memory to minimize file caching, then the swapping facility should be on 
the same page, so to speak. Conversely, if Linux file caching can be turned off 
or restricted, then there might not be a need for swapping at all.  
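
Linux won't let you turn the page cache off outright, but it can be biased in 
that direction. A minimal sketch, assuming the stock vm.swappiness and 
vm.vfs_cache_pressure sysctls; the values are placeholders, not 
recommendations, and it needs root:

# Nudge reclaim away from swapping and lean harder on the caches.
import pathlib

tunables = {
    "vm/swappiness": "10",           # assumption: prefer dropping cache over swapping
    "vm/vfs_cache_pressure": "200",  # assumption: reclaim dentry/inode caches harder
}

for name, value in tunables.items():
    pathlib.Path("/proc/sys", name).write_text(value + "\n")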

Ray    


-----Original Message-----
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Marcy 
Cortes
Sent: Wednesday, December 09, 2015 6:06 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Status of VDISK after swap space usage

This is interesting.
Having 2TB of page space in test/dev, I'm sure we have some potential space 
savings if this could be done across all dev/test servers...

However, you'd really want to do this carefully, and it might not even be 
practical if Linux itself didn't drive this diag.
If done outside (a la Rob's cmmflush), you'd need to swapoff the disk first, 
both to make Linux clean it up and to prevent it from putting more pages on 
there. Swapoff of a full 2G disk takes a very long time (more than 30 minutes 
IIRC). It probably drives up the page rate while that is going on as well.
And if you've done the swapoff, it's no big deal to just add a vary off, 
detach, define, mkswap, swapon to your script in lieu of a diag to do it.
You'd probably need to stagger this activity if you have a lot of servers on a 
single box (see the offset sketch below).
As often as we have to reboot or IPL VM for security patching, HW work, and 
other maintenance these days, it gets cleaned up fairly often here anyway.
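
One way to stagger it, as a minimal sketch: hash each guest's hostname into a 
per-server start offset, so the same cron job can fire everywhere while each 
server drains its swap in its own slot. The six-hour window is an assumption 
to tune to the size of the pool:

# Stable per-host delay for a fleet-wide scheduled job.
import socket
import zlib

WINDOW_MINUTES = 6 * 60  # assumption: spread the runs across six hours

def start_offset_minutes():
    # Same guest always lands in the same slot.
    return zlib.crc32(socket.gethostname().encode()) % WINDOW_MINUTES

if __name__ == "__main__":
    print(start_offset_minutes())  # e.g. sleep this many minutes, then recycle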

So I can see why VM development would reject it: they'd really need Linux 
development to make use of it, and Linux development might not be that 
interested if it's never going to be accepted into the kernel, since the x86 
side doesn't swap well anyway and adding cheap memory is the solution there.

Actually, maybe it is here too.  Making the VDISKs smaller and memory size 
bigger, and running CMM often, might accomplish what is desired?
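
For the "running CMM often" part, a minimal sketch, assuming the s390 cmm 
module is loaded (it exposes /proc/sys/vm/cmm_pages); the 64 MB figure is an 
arbitrary placeholder:

# Ask CMM to hand a chunk of pages back to z/VM.
import pathlib

CMM_PAGES = pathlib.Path("/proc/sys/vm/cmm_pages")
RELEASE = (64 * 1024 * 1024) // 4096  # 64 MB worth of 4K pages

if CMM_PAGES.exists():  # only present when the cmm module is loaded
    CMM_PAGES.write_text(f"{RELEASE}\n")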

