It really depends on your application mix.

If you have a single guest running, such as zLinux, then VM really
doesn't have much to offer.  Unless you can make good use of
VM-specific features such as Minidisk Cache, VSWITCH, or HiperSockets,
VM can't give you better than what your guest does on its own.
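
For example (the device number and guest name here are made up), the
sort of thing I mean:

   CP SET MDCACHE SYSTEM ON            system-wide minidisk cache (on by default)
   CP DEFINE VSWITCH VSW1 RDEV C00     one real OSA shared by all the guests
   CP SET VSWITCH VSW1 GRANT LINUX01   authorize a guest to couple to it

(HiperSockets get defined in the IOCDS rather than by a CP command, so
no one-liner for those.)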

However, if you ramp up with many guests (production systems, test
systems, utility systems, etc.) and give the CP scheduler something
to work with, i.e. SET SHARE, VM can run at 100% (times the number of
processors) and stay there, with good response time.
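
Something like this, say (guest names and share values made up; tune
them to your own mix):

   CP SET SHARE LINUXP1 RELATIVE 300            production guest, bigger slice
   CP SET SHARE LINUXT1 RELATIVE 100            test guest, the default share
   CP SET SHARE LINUXU1 RELATIVE 50 LIMITSOFT   capped at its share unless cycles would otherwise go idle

The same shares can go in the directory entries as SHARE statements so
they stick across a logoff.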

Paging is fine.  Thrashing isn't.
100% CPU utilization is fine, especially if you have some
lower-priority systems to mop up spare cycles.
I/O doesn't get any better than this, especially if you are smart
about it: FICON, ranks, all that DASD subsystem stuff.
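
An easy way to tell paging from thrashing is plain old INDICATE from a
privileged user:

   CP INDICATE LOAD      processor utilization, paging rate, MDC hit ratio
   CP INDICATE QUEUES    who is sitting in the dispatch and eligible lists

If guests are piling up in the eligible list, that's CP telling you it
is short on storage, not that running 100% busy is hurting you.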

Of course, you also have the ability to make things run a lot worse.
This is a shared environment, not dedicated the way an LPAR or other
servers are.


Mainframes are really, really great at context switching.  That is what
other servers tend to fall apart on when you ramp up.  Same with I/O.


Memory and CPU, not so good in comparison.  Why?  Because memory and
CPU are much more expensive than on other platforms.

But with the I/O and context switching, you can use ALL of your
expensive CPU and memory, and hence get your money's worth out of them.

Plus, on the Linux side, you can save a lot on software licensing:
1 IFL, 1 engine, 1 license, no matter how many copies you run.

Well, back on topic...

I have never read anything that suggests that VM isn't in the same
ballpark in CPU utilization and growth as MVS.  

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 10/25/2006 11:30 AM >>>
I know this is an "it depends" question, but I hope some of you can
give me a very general answer.  As an MVS guy, I'm used to being able
to run the processor very close to or even at 100% without significant
performance degradation.  Assuming that everything is configured and
tuned properly (a big assumption, I know), can VM drive the processor
the same way?  Our application people are used to other platforms that
don't tolerate high CPU utilization so well and think things are going
to start falling apart when we hit 80%.  I'd like to reassure
them--but only if it's accurate to do so!  This is a WebSphere
application running on multiple SUSE instances, with the data on DB2
under z/OS.  Is it reasonable for me to expect--again, assuming
everything else is right--to be able to run at 90+ percent without
problems?



Richard Heritage
Lead Systems Software Engineer
IT @ Johns Hopkins
