Within a virtual environment, overcommitment is NOT a good idea.
Memory-wise, you would (I hope) prefer to have the memory footprints all
fit within the physical memory of the server (with some left over for the
hypervisor, of course), which should avoid double-paging.
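As a back-of-envelope check, you can total the guests' defined memory against the host's physical memory; every size below is a hypothetical number for illustration, not a recommendation.

```shell
#!/bin/sh
# Hypothetical sizing check: do the guests' memory footprints fit within
# the host's physical memory, leaving a reserve for the hypervisor?
HOST_MEM_MB=65536                 # physical memory on the host (assumed)
HYPERVISOR_MB=2048                # reserve for the hypervisor (assumed)
GUESTS_MB="8192 8192 4096 4096"   # defined memory per guest (assumed)

total=0
for g in $GUESTS_MB; do total=$((total + g)); done

if [ "$total" -le $((HOST_MEM_MB - HYPERVISOR_MB)) ]; then
    echo "fits: ${total} MB of guest memory in $((HOST_MEM_MB - HYPERVISOR_MB)) MB"
else
    echo "overcommitted by $((total + HYPERVISOR_MB - HOST_MEM_MB)) MB"
fi
```

If the sum exceeds physical memory minus the hypervisor reserve, you are overcommitted and double-paging becomes a real possibility.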

Alternatively-- not that this works out as well with Linux as with older
versions of Unix-- you'd overcommit memory but give each instance NO
paging space and let the hypervisor sort out paging itself.
Unfortunately, Linux sees under-utilized memory as a wonderful place to
cache file contents (the "buffer cache"), which was a tunable size on
older Unix versions (SCO Unix, etc.).
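On a modern Linux guest you can't cap the page cache directly the way those older Unixes did, but you can bias the kernel away from guest-side paging and toward reclaiming cache; a hypothetical starting point (these values are assumptions to be tested per workload, not a recipe):

```
# /etc/sysctl.conf fragment -- starting values are assumptions; tune per workload
# Prefer reclaiming cache over swapping out guest pages:
vm.swappiness = 10
# Reclaim dentry/inode cache more aggressively:
vm.vfs_cache_pressure = 200
```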

The real answer, when it comes to memory, is "It depends".

If you have performance requirements-- and adequate physical memory-- don't
allow any paging... though this really requires a lot of testing and tuning
to make sure your application does not run up against the wall (which means
a LOT of tuning of Java instances to keep the heap memory restrained while
also paying attention to unshared "classes"; IIRC, any classes pulled in by
a JRE are NOT in a sharable code segment).
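Since each JRE pays for its own classes, a rough per-JVM footprint has to count heap, class metadata, and thread stacks separately for every instance; all of the numbers below are hypothetical.

```shell
#!/bin/sh
# Back-of-envelope footprint for one Java instance (all values assumed):
HEAP_MB=512          # the heap cap you'd set on the instance
METASPACE_MB=128     # class metadata; NOT shared between JVMs
THREADS=50           # application + JVM service threads
STACK_KB=512         # per-thread stack size
JVM_OVERHEAD_MB=64   # code cache, GC structures, etc.

per_jvm=$((HEAP_MB + METASPACE_MB + (THREADS * STACK_KB / 1024) + JVM_OVERHEAD_MB))
echo "approx footprint per JVM: ${per_jvm} MB"
```

Multiply that by the number of instances in the guest and you have the floor you need to stay under if you want no paging at all.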

As has been said before...   it all depends.

Consider other resources that get shared... like network connections.  In
an LPAR you get your own Ethernet interface; in a virtualized environment
you're sharing a physical connection (even if it allows you to extend VLANs
into the box), and this imposes a potential choke point (one reason it may
be unwise to place database servers within a VMware instance).  CPU
allocation, of course, is another place you may not be eager to overcommit,
leaving enough cycles for the hypervisor to serve everyone.

Where transaction performance is not as critical-- like development
(compiling and functional testing)-- overcommitment isn't as much of a
problem (though I've seen an environment with a 32:1 overcommitment of CPU
and 16:1 of memory... to me, a guarantee of thrashing unless only one
instance is doing "real work").  In any kind of system testing-- much less
performance testing-- you'll want to mirror a "target" production
environment as closely as you can to avoid timing-induced anomalies.
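For reference, a ratio like that 32:1 figure is just the total of virtual resources defined across guests divided by the physical resources underneath; the counts below are hypothetical.

```shell
#!/bin/sh
# Hypothetical overcommitment ratios (all counts assumed for illustration):
VCPUS_TOTAL=256        # virtual CPUs defined across all guests
PCPUS=8                # physical CPUs on the host
GUEST_MEM_MB=1048576   # memory defined across all guests (1 TB)
HOST_MEM_MB=65536      # physical memory on the host (64 GB)

echo "CPU overcommit:    $((VCPUS_TOTAL / PCPUS)):1"
echo "Memory overcommit: $((GUEST_MEM_MB / HOST_MEM_MB)):1"
```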

Yeah, I know, the above is not particularly helpful.

Remember, it all depends.  This is an emergent rather than a deterministic
process.

-soup

On Wed, Apr 6, 2016 at 10:41 AM, Levy, Alan <al...@doitt.nyc.gov> wrote:

> Hopefully someone can resolve an argument that I'm having with a colleague.
>
> We are competing with the distributed side, which uses VMware to create
> SLES Linux servers. They create servers with 8G of memory and 8G of swap
> for EVERY server. My colleague wants to follow this architecture for our
> z/VM servers (giving them 8G of memory and 8G of VDISK).
>
> My opinion is to give them a default of 2G of memory and 2G of VDISK and
> increase the main memory as needed.
>
> My colleague is concerned that if we give them less, they will always go
> to the distributed side, since it gives more. I am concerned that giving
> them so much memory might negatively impact our z/VM systems.
>
>
> ----------------------------------------------------------------------
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> ----------------------------------------------------------------------
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



--
John R. Campbell         Speaker to Machines          souperb at gmail dot com
MacOS X proved it was easier to make Unix user-friendly than to fix Windows
"It doesn't matter how well-crafted a system is to eliminate errors;
regardless of any and all checks and balances in place, all systems will
fail because, somewhere, there is meat in the loop." - me

