On 02/17/2015 03:58 AM, Jaco Kruger wrote:
> The following components are shared between Development and Production 
> environments 
> The following components are not shared between Development and Production 
> environments

Jaco, it's not clear from your post whether these passages describe the
current (before-split) actual state or the future (after-split) desired
state.  As others have mentioned, if this is the desired state you can't
do (all of) that... or where you "sort of" can, it's not the commonly
held meaning of sharing (more on that below, using zWLM as an example).

Looking at the Merging doc is a good choice.  To the degree the "why"s
are described in there, look for cases where it's not merely "which
systems" force the change, but which _work_.

zWLM example (disclaimer: in a prior life, I was one of the lead
perpetrators of goal mode, starting with WSM, but I've been out of that
group for 10 years).  Velocity goals are intimately dependent on the
volume of work, number of CPUs, etc.; see my CMG95 paper for gory details.

Regardless of goal type, if you have service classes that before-split
run work from both production and development, splitting that workload
(which you have no choice in, once you split the sysplex ... WLM doesn't
cross sysplex boundaries, period) means that the subset of the workload
characterization data each after-split WLM instance sees will be
different.  Potentially very different.  Thus, policy goals will need to
be adjusted.
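To make the velocity point concrete, here's a toy numeric sketch.  The
sample counts are invented, and this is of course not real SMF data or
WLM code; it just applies the standard definition of execution velocity,
using samples / (using + delay samples) * 100, to show why a goal tuned
against the mixed workload stops matching what either after-split WLM
instance sees:

```python
# Execution velocity = 100 * using / (using + delay).  Counts invented.
def velocity(using, delay):
    return 100.0 * using / (using + delay)

# Before the split: prod and dev work run mixed in one service class.
prod = {"using": 400, "delay": 100}
dev  = {"using": 100, "delay": 400}

combined  = velocity(prod["using"] + dev["using"],
                     prod["delay"] + dev["delay"])
prod_only = velocity(prod["using"], prod["delay"])
dev_only  = velocity(dev["using"], dev["delay"])

print(combined)   # 50.0 -- the number your before-split goal was tuned to
print(prod_only)  # 80.0 -- what the prod-side WLM sees after the split
print(dev_only)   # 20.0 -- what the dev-side WLM sees after the split
```

Same definition, same work, but each after-split instance observes a
different subset, so a velocity goal of 50 is now wrong on both sides.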

The degree of pain associated with this very much depends on the degree
to which your before-split policy already separates production and devt
work into distinct service classes.  If the before-split policy pretty
much partitions work already, then it's only cross-service-class effects
coupled with "how much of the machine is available to any given
class/period" that dominate.  If they're all mashed together today, it's
hard to say much more than "things will change"; if you can partition
their reporting beforehand, that will help.  But in any case you should
plan on watching prod Very Carefully after the split to see where goals
need to be adjusted, unless your system is fundamentally unconstrained.
If your policy goals don't work as desired when it IS constrained, you
have an upstream problem.  Crazy example: if your prod work was all
discretionary (below dev), and system CPU is 50%, you might never
notice.  If something starts looping and the CPU is now at 100%, you
Will notice.

The only "sort of" cross-sysplex WLM thing you can do is about managing
the service definition, not what happens at run time.  You can, if you
want, use a single service definition "master copy" that you clone
across all these (2, here) sysplexes; basically, edit the master
wherever it is, export it to a file, move/copy the file (clone it),
import it in the other sysplexes.  Some people do that; you generally
set up service classes and classification rules so that any given
sysplex only runs work in "its" subset of the classes.  Having extra
classes that never receive work in a given sysplex causes minimal
overhead.  Other more complicated ("more shared") svdefs can be created,
but I doubt anyone outside of WLM devt could actually manage it
effectively... just too easy to make a mistake.
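A conceptual sketch of the "each sysplex only runs work in its subset of
the classes" pattern.  To be clear, real classification rules live in
the WLM service definition (maintained through the ISPF dialog), not in
code, and all the subsystem/jobname/class names below are invented; the
point is only that the same cloned rule set is harmless where its extra
classes never match any work:

```python
# Invented rules: (subsystem type, qualifier prefix, service class).
# In a real svdef these are classification rules, not a Python list.
RULES = [
    ("JES", "PROD", "PRODBAT"),
    ("JES", "DEV",  "DEVBAT"),
    ("STC", "PROD", "PRODSTC"),
    ("STC", "DEV",  "DEVSTC"),
]

def classify(subsystem, jobname):
    for subsys, prefix, svclass in RULES:
        if subsys == subsystem and jobname.startswith(prefix):
            return svclass
    return "SYSOTHER"  # WLM's catch-all default

# The identical cloned svdef is installed in both sysplexes; each plex
# simply never receives work that matches the other's classes.
print(classify("JES", "PRODPAY1"))  # PRODBAT -- only arrives on prod plex
print(classify("JES", "DEVPAY1"))   # DEVBAT  -- only arrives on dev plex
```

The DEV* classes sit idle in the prod sysplex (and vice versa) at
minimal cost, which is what makes the single-master-copy approach workable.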

If, in your dev/prod case, dev is "just" a mirror of prod used as a
staging area ("same" work, running in same srvclasses, just with looser
goals), then the master svdef makes good sense - you can keep the
classification rules identical, and just have separate policies
activated for dev vs prod.  Be wary of creep though - often when dev
starts out as a mirrored staging area for prod, over time it gets other
work dumped in there never intended to hit prod, i.e. it drifts away
from being a simple mirror/staging area.  The further it drifts, the
less a shared svdef makes sense.
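For the mirror/staging case, the "identical rules, looser goals" idea
maps onto WLM's policy-override mechanism: each service policy in the
definition can override goals from the base policy.  A conceptual
sketch, with all class names, policy names, and goal values invented:

```python
# One service definition: identical classes and classification rules.
# Each policy overrides goals per service class; empty = base goals.
BASE_GOALS = {"ONLINE": {"type": "velocity", "value": 60}}

POLICY_OVERRIDES = {
    "PRODPOL": {},  # prod activates base goals unchanged
    "DEVPOL":  {"ONLINE": {"type": "velocity", "value": 30}},  # looser
}

def effective_goal(policy, svclass):
    # An override wins; otherwise the base-policy goal applies.
    return POLICY_OVERRIDES[policy].get(svclass, BASE_GOALS[svclass])

print(effective_goal("PRODPOL", "ONLINE"))  # velocity 60 on prod
print(effective_goal("DEVPOL", "ONLINE"))   # velocity 30 on dev
```

Prod activates one policy, dev the other, and the classification rules
stay byte-identical in both clones, which is exactly what keeps the
master-copy scheme manageable.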

-- 
John Arwe
IBM z/VM OpenStack and zKVM

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN