There is actually more science to this than one would imagine.  We just went
through this and chatted with Bill Bitner on it to try to understand it
all.  Here are the things to keep in mind for setting SHARE (besides what is
already written in the HELP):

For RELATIVE share, the amount should be X times the number of virtual
processors the guest has.  If you want all servers to be "equal" at REL 100
and you have a server with two virtual CPUs, setting it to REL 100 really
gives each vcpu only REL 50.  That server would need REL 200 to be equal to
all the others at REL 100.
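
As a concrete example (LINUX01 is just a made-up user ID here), that would
look something like:

   CP SET SHARE LINUX01 RELATIVE 200

and a QUERY SHARE LINUX01 afterwards will show you what CP actually has in
effect for that guest.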

For ABSOLUTE the vcpus don't matter, but the REAL cpus do, because an
absolute share is a percentage of the total capacity of all real processors.
If you want to cap the web server at no more than 50% of one real processor,
the value is X% / #_real_cpus.  That means if you have a server you want to
cap at 50% and you have 11 IFLs, you would use SET SHARE REL nnn ABSOLUTE
4.5% LIMITHARD   (that is 50 / 11).  The LIMITHARD will prevent it from going
over its limit; otherwise, if the cpu is available, the server will get it.
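
Putting that together for the 11-IFL case (WEBSRV being a hypothetical user
ID), the command would look something like:

   CP SET SHARE WEBSRV RELATIVE 100 ABSOLUTE 4.5% LIMITHARD

which keeps the normal relative share at the default 100 but hard-caps the
guest at roughly half of one real processor.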

It takes a little getting used to, and I just covered some basics.  Hopefully
with everyone else's updates and this, it gets you set for what you need.

Jim Vincent

On 8/2/07, Rob van der Heij <[EMAIL PROTECTED]> wrote:
>
> On 8/2/07, Anne Crabtree <[EMAIL PROTECTED]> wrote:
> > We have five linux instances running, two of which aren't really being
> > used, two are production and one is test.  In the test linux instance,
> > they are running a web app (which I know nothing about).  At certain
> > times during the day, this test instance will take close to 100% of the
> > IFL.  We want to back it off so that it can take no more than 50% to see
> > how that affects them.  Would it be logical to issue SET SHARE linux
> > RELATIVE 50 since it is defaulting to REL 100?  Will that do what I
> > think and limit it to half of what the other virtual machines "get"?
>
> Really depends on what you want to achieve.
> * When you set REL 50 and others have REL 100, the one with the smaller
> share will get only half of what another would get *if there is CPU
> shortage*.  It's something you do for test versus production servers.
> * If you want to "kneecap" the virtual machine to see what the
> application behavior would have been when you did not have all that
> spare capacity, then a share with LIMITHARD is useful (and you want
> that absolute% because otherwise you have a moving target).
>
> And for virtual machines with multiple virtual CPUs, you need to
> realize the share is divided by the number of virtual CPUs.
>
> When a virtual machine is looping or doing useless work that you want
> to get rid of, then FORCE is more appropriate ;-)   But yes, I have
> sometimes set virtual machines to REL 1 LIMITHARD to let them run
> until I have time to start diagnosing the problem.
>
> Rob
> --
> Rob van der Heij
> Velocity Software, Inc
> http://velocitysoftware.com/
>
