> -----Original Message-----
> From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On 
> Behalf Of Rob van der Heij

> The idea is to give Linux more resources when there is more 
> available, so the server can take advantage of that.
> Without such a mechanism, you can't do much better than make 
> each Linux pretty small (so effectively do what you suggest: 
> leave only limited space to cache data). This works best when 
> you run one application per server.

The problem is that VMRM/CMM doesn't work like that: VMRM only
indirectly gives Linux more resources - its fundamental approach is to
take them away when the system is constrained.

Our current zLinux workload is not very CPU intensive, and memory
constraints are our major concern (because of the price). Overcommitting
memory seems necessary in order to minimize costs. With that as a
starting point, I would certainly go a long way to avoid having "Linux
file cache" end up on VM paging disks. So yes, I only want to leave
limited space for cached data.

The effective VM page-in rate __per guest__ is typically below 5 MB/sec,
because every single page fault costs a couple of milliseconds (I/O
delay plus interrupt handling). I would much rather re-read the file
from LVM-striped DASD at far higher throughput.
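(Back-of-the-envelope, assuming 4 KB pages: at 1-2 ms per synchronous
fault a guest can resolve roughly 500-1000 faults per second, which is
only about 2-4 MB/sec - hence that ceiling.)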

What I suggest with cpuplugd is basically to let Linux (in your own
words) "behave social".
 
> For some applications, using cmm_pages in a kind of scheduled 
> way (eg squeeze during day shift and let air out for nightly 
> backups) may work well as a manual compromise.

If our current guests were running something newer than SLES9, I would
certainly consider scheduling a cron job to "echo 1 >
/proc/sys/vm/drop_caches". The nice thing about cpuplugd, on the other
hand, is that it adjusts dynamically during the day - it doesn't have
to be scheduled.
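
For the record, on a kernel that has drop_caches (2.6.16 and later, so
anything newer than SLES9), the scheduled variant would be no more than
a root crontab entry along these lines - the time is just an example:

  # flush dirty pages first, then drop the clean page cache
  0 2 * * *   sync && echo 1 > /proc/sys/vm/drop_caches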
 
> The problem imho is that we don't have metrics in Linux to 
> decide which data in page cache is "luxury" storage. The fact 
> that the page is also on disk does not imply that you don't 
> need it in memory as well (eg program code, data swapped in 
> again, shared libraries, mmapped files, shared process 
> memory). So some workloads will not show high swap rates when 
> you squeeze it too hard.

Generally I want to avoid having my Linux guests swap at all - not even
to VDISK, which takes up memory that would be better utilized as main
storage. And I do expect the LRU algorithms to help Linux part of the
way when it decides which pages to keep in memory.

> When you combine the Linux metrics and VM metrics in a single 
> place, it is often easier to see what is happening and where 
> you have excess memory. Especially when you retain 
> performance history information and can see the growth over time.

I guess that's somewhat what CMM2 does... Unfortunately I have not
figured out how to test CMM2 on a larger scale in our environment, but
I'm certainly looking forward to trying it out.

Does Velocity have a product that combines Linux metrics, VM metrics,
and CMM1 - a more sophisticated version of VMRM/CMM?

Best,
Klaus

