(Hi Mark!)

That's the disadvantage of starting before everyone else and having too
many servers :)
At least I've killed the sles7's!

The problem with going from sles8 to sles9x is that it's a new server.  That requires
the cooperation of the users.  They don't like to do that if everything
is all hunky dory.  They have other things to do (so they tell me).

I'm hoping sles9x to sles10x is a true upgrade and we can do it without
bothering the applications folks.  That's a project to figure out over
the holiday freeze, though.

I'm pretty sure all of production will be sles9x within the next 2
months - woo hoo!  The promise of better performance from WAS6.1 and
sles9x saving them a few IFLs is finally getting their attention.

(see you next week).

Marcy Cortes
 
"This message may contain confidential and/or privileged information. If
you are not the addressee or authorized to receive this for the
addressee, you must not use, copy, disclose, or take any action based on
this message or any information herein. If you have received this
message in error, please advise the sender immediately by reply e-mail
and delete this message. Thank you for your cooperation."


-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Mark Post
Sent: Friday, September 14, 2007 9:20 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] linux performance behind load balancer

>>> On Fri, Sep 14, 2007 at  6:48 AM, in message
<[EMAIL PROTECTED]>,
"Evans, Kevin R" <[EMAIL PROTECTED]> wrote: 
> Rob,
> 
> As we are just switching to Omegamon and almost up to implementation 
> of our first user to come into a new zLinux front end, can you give 
> any further details on your comment below?

Prior to the kernels used in SLES10 and RHEL5, the way CPU consumption
was tracked by the Linux kernel didn't take into account that the system
may be running in a shared/virtualized environment.  The assumption in
place (valid until LPARs, z/VM, VMware, and Xen came along) was that the
kernel was in complete control of the hardware, so any passage of time
between the last clock value taken and the current one was assigned to
whatever process was dispatched in the interval.  The problem being, of
course, that the virtual machine/LPAR might not have been running at all
during that time.  So, Linux could report that the CPU was 100% busy,
when in fact it was only being dispatched, for example, 3% of the time.
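(As a side note for anyone poking at this today: on kernels with the CPU
accounting fixes, the hypervisor-stolen time shows up as a "steal"
column on the "cpu" lines of /proc/stat.  A minimal sketch of reading
it, assuming the standard field order documented in proc(5); the sample
values below are made up for illustration:)

```python
# Sketch: parse the "steal" field that the CPU-accounting patches
# exposed via /proc/stat on virtualized guests.  Field order on the
# "cpu" line: user nice system idle iowait irq softirq steal [guest...]

def steal_percent(cpu_line):
    """Return steal time as a percentage of total jiffies so far."""
    fields = [int(v) for v in cpu_line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0  # old kernels lack it
    total = sum(fields)
    return 100.0 * steal / total if total else 0.0

# Illustrative sample line (on a real system: open("/proc/stat")):
sample = "cpu 4705 150 1120 16250 520 30 45 680"
print(f"steal: {steal_percent(sample):.1f}%")  # -> steal: 2.9%
```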

Of the various performance monitors that were being marketed for
mainframe Linux, only Velocity Software's product combined the Linux
data with the z/VM monitor data, and normalized the Linux values to be
correct.  (Obviously this only worked in an environment where z/VM was
being used as the hypervisor.)  This was a big factor in many cases of
which monitor to choose.  Since the release of the "cpu accounting"
patches, and incorporation into SLES and RHEL, that's no longer the
case, unless you're still running SLES8 (Hi, Marcy!) and SLES9 (Hi,
almost everyone else!), or RHEL3 or 4.  Now the decision is based on
more traditional criteria, as opposed to being right or very wrong.

If you have a userid and password to access the SHARE proceedings, you
can see Martin Schwidefsky's presentation on this at
http://www.share.org/member_center/open_document.cfm?document=proceedings/SHARE_in_Seattle/S9266XX172938.pdf

(I have no idea why I didn't ask Martin for a copy of that for the
linuxvm.org web site.  Rats.)


Mark Post

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions, send
email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit http://www.marist.edu/htbin/wlvindex?LINUX-390

