On Thu, Feb 14, 2008 at 1:54 AM, Paul Vincent
<[EMAIL PROTECTED]> wrote:

>  I'm new to z/VM and have a question.  Should I define virtual processors to
>  z/VM service ids/guests (TCPIP, Linux guests...) with the 'MACHINE ESA ## &
>  CPU #' control statements in the USER DIRECT file?  Is there a performance
>  benefit/cost, if I have more than 1 IFL, to define virtual processors equal
>  to the number of IFLs?  Or will a single virtual processor perform just
>  fine?

To avoid possible misunderstanding: there is no fixed relation between
virtual and real CPUs. You don't *have* to define multiple virtual
CPUs to use multiple real CPUs; z/VM will happily spread the work of
multiple virtual machines over the real CPUs. Those who say you *must*
define as many virtual CPUs as there are real CPUs come from a
different world than most of us. In many cases things work better with
only one virtual CPU.
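For reference, the directory statements from the question look roughly
like this (the user name, password, storage sizes and IPL device here
are made up for illustration; a virtual-UP guest simply omits the
second CPU statement):

```
USER LINUX01 XXXXXXXX 512M 1G G
  MACHINE ESA 4
  CPU 00 BASE
  CPU 01
  IPL 0200
```

The number on MACHINE ESA is the maximum number of virtual CPUs the
guest may ever define; each CPU statement defines one of them at logon.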

For CMS applications there is no benefit because additional virtual
CPUs will not be used. That's the easy case. With Linux, the answer is
much more complicated. That's one of the reasons progress on my
"Virtual-MP is Evil" paper is slower than I would like. There is a lot
of detail in this that you probably don't care about yet, so let me
try to be very brief...

The question is fairly unique to Linux on z/VM. In the world of
dedicated servers you normally don't have the option to add just a CPU
without changing anything else. Most installations use "standard
configurations" - more like rental car classes (the bigger class does
not only seat more people, it also holds their bags and has a bigger
engine, etc). So when you get a 2-way server, you also get more
memory, maybe different disks, etc. Without proper instrumentation you
may not be able to tell which of those items got you the improved
response times; you only know the class D server was faster than the
class A server.
But with Linux on z/VM you *can* change a single configuration
parameter (in fact, many of us were trained to change only one thing
at a time). This flexibility only pays off when you measure and tune.
Otherwise you might be better off doing classes (sometimes combined
with "political sizing" as well).

The cost of defining multiple virtual CPUs for Linux comes in two areas:
* increased CPU usage inside Linux because of overhead related to
locking, scheduling, etc.
* increased cost for z/VM to provide the virtual machine with the net
CPU capacity it consumes
When you have more than enough hardware (CPU as well as storage) and
you only have one virtual machine that you care about (a lab
environment, for example) you may be able to afford the cost. When you
have many Linux virtual machines running, efficiency of the operation
becomes an issue.
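To make the first of those costs a bit more concrete, here is a small
illustrative sketch (plain Python, nothing z/VM-specific; the thread
counts and iteration total are arbitrary) that times the same amount
of lock-protected work done by one thread versus several contending
threads:

```python
import threading
import time

TOTAL = 200_000  # total lock-protected increments to perform


def run(num_threads: int) -> float:
    """Time TOTAL increments of a shared counter, split over num_threads."""
    lock = threading.Lock()
    counter = [0]

    def worker(n: int) -> None:
        for _ in range(n):
            with lock:  # every increment takes the shared lock
                counter[0] += 1

    threads = [
        threading.Thread(target=worker, args=(TOTAL // num_threads,))
        for _ in range(num_threads)
    ]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start


if __name__ == "__main__":
    # The total work is identical; only the number of contending
    # threads differs. With contention, the multi-threaded run
    # typically takes longer, not shorter.
    print(f"1 thread : {run(1):.3f}s")
    print(f"4 threads: {run(4):.3f}s")
```

The analogy is loose, but the same principle applies inside an SMP
Linux kernel: more CPUs sharing locks means more overhead, not more
useful work, unless the workload actually runs in parallel.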

Linux can benefit from multiple virtual CPUs in some situations, but
only if both of these apply:
* the application is able to use multiple CPUs during peak times
(through threads, for example). Running multiple different applications
on the same Linux server fits this, but it may not be the best way to
run them.
* at peak times z/VM is likely to have enough CPU capacity available
to run all virtual CPUs at the same time (only in a lab environment
with no real competition is this the same as the number of real CPUs
defined)
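A quick way to see what the guest itself thinks it has is to ask from
inside Linux; a small sketch (sched_getaffinity is Linux-only, hence
the guard):

```python
import os

# Number of online CPUs the kernel reports -- under z/VM these are the
# virtual CPUs defined for the guest, not the real IFLs.
online = os.cpu_count()
print(f"online CPUs: {online}")

# On Linux, the set of CPUs this particular process may run on:
if hasattr(os, "sched_getaffinity"):
    print(f"usable by this process: {len(os.sched_getaffinity(0))}")
```

If the application never keeps more than one of these busy at peak,
the extra virtual CPUs are pure overhead.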

When there is a benefit for your server, the hard question is how that
relates to the increased cost of running it. When you have lightly
utilized servers, multiple virtual CPUs are normally not justified
(this is what Mark referred to: even when configured properly, you
will find a lightly utilized server with multiple virtual CPUs more
"sluggish" than a virtual-UP server). But when your application uses a
fair amount of resources over longer periods, it may be justified to
accept the increased cost to have it complete the workload quicker.

Did I mention instrumentation and a performance monitor? I should
have - that is what helps you understand your workload and how system
resources are being utilized.

Rob
-- 
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/
