On Apr 13, 12:57 am, nitz-...@gmx.net (Barbara Nitz) wrote:
> Nick,
> 
> from your email I figured you're somewhere in LOGR development :-) And I am
> absolutely glad someone finally 'gets' my paranoia. I have a hard time getting
> the problem across to my colleagues, too! Mostly because they don't really
> understand how offload works.
Barbara,

Yes, I'm in logger development.

> Also, keep in mind that I am not saying that we already *had* a corrupted
> RRS log stream, I just see a big timing window (that we will probably hit
> at the first opportunity - we always hit obscure timing problems) if we
> activate LOGR in both halves.

Usually the system that does the offload for a log stream is the one whose
write causes the HIGHOFFLOAD threshold to be exceeded. So if your utility
program ran on your TEST side but didn't write to the log stream, the risk
would be smaller. But it doesn't always work that way: offloads can be
started for various reasons, such as structure events, recovery, and
offload failures on other systems.  So the worry is warranted.
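
For anyone following along, that threshold is the HIGHOFFLOAD value in the
LOGR policy. A minimal sketch of the IXCMIAPU statements (the stream and
structure names here are made up, not anything from your setup):

   DATA TYPE(LOGR) REPORT(NO)
   DEFINE LOGSTREAM NAME(TEST.UTIL.LOG)
          STRUCTNAME(LOG_TEST01)
          HIGHOFFLOAD(80)
          LOWOFFLOAD(20)

With those values, a write that pushes interim storage past 80% full kicks
off an offload, which moves data to the offload data sets until usage
drops to 20%.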


> Now he tells me! Would you please explain to the IBM pricing people that their
> PSLC pricing schemes make customers do this, which is absolutely contrary to
> the parallel sysplex design points?!?

I'm not sure I have that much sway, but I definitely sympathize with the
ironic nature of the setup. 


Art,
> As long as IBM marketing dangles the carrot, corporate IT will continue to
> support that, and only that, which is necessary to save $$.  "Shamplex" has
> become part of the nomenclature...
> 
> As for "where" support to restrict offload locations, this too was proposed by
> a poster last year, and so well-written that some of us ran to the books to
> read what we thought we missed!  I would definitely make use of the feature.
> 
> In the meantime, I'm looking into RACF profiles to prevent connectors on
> non-owning images, which in turn restricts offloads.  That will only work
> so long as I have a RACF database per subplex.  I reckon we'd better have
> SMS sorted out before we start RACF data sharing...

I think the marketing solution would either be more restrictive or more
costly.  But perhaps there are a few things that can be done with existing
functions that might help.

RACF might help.  If you can use RACF to prevent the log streams from
being connected on the wrong systems, or prevent allocation of data sets
that have log stream data set names, that might avoid the contention.
Turning logger off entirely would do it too, but that's probably not
ideal.
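
As a rough sketch of that idea (all names invented): in the RACF database
used by the test-side systems, you could deny access to the production log
streams through the LOGSTRM class,

   RDEFINE  LOGSTRM PROD.** UACC(NONE)
   SETROPTS CLASSACT(LOGSTRM) RACLIST(LOGSTRM)

and fence the offload data sets with a generic DATASET profile on the log
stream high-level qualifier (IXGLOGR is the default HLQ):

   ADDSD 'IXGLOGR.PROD.**' UACC(NONE)

As Art noted, that only holds together while each subplex has its own RACF
database.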


This may have been discussed before, but have either of you considered
using SMS classes to fence test and production DASD from each other, and
using the LS_xxxCLAS / STG_xxxCLAS log stream parameters to separate test
and production log streams?  If a production log stream accidentally
connected on a test system, it might still be able to get to the right
DASD pool, and something truly shared like OPERLOG might work for the
whole plex.
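
A sketch of how that might look in the LOGR policy, assuming your ACS
routines map the (invented) storage classes SCTEST and SCPROD to the
fenced DASD pools:

   DEFINE LOGSTREAM NAME(TEST.APPL.LOG)
          DASDONLY(YES)
          LS_STORCLAS(SCTEST)
          STG_STORCLAS(SCTEST)
   DEFINE LOGSTREAM NAME(PROD.APPL.LOG)
          DASDONLY(YES)
          LS_STORCLAS(SCPROD)
          STG_STORCLAS(SCPROD)

Even if the wrong system connects, the offload and staging data sets still
land in the pool named on the definition.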


In V1R8, logger did add an option to separate test and production work on
a log stream basis, but it was intended for clients who want to run test
and production work on the same system.  Logger sets up separate tasks for
test and production work, and specifying the GROUP(TEST) or
GROUP(PRODUCTION) option on the log stream definition tells the log stream
which set of tasks to use.  There are also restrictions enforced, such as
a test log stream not being able to connect to a structure with a
production log stream connected to it. It also prevents test log streams
from using more than 25% of the structures. The goal was to prevent "TEST"
log streams from harming "PRODUCTION" log streams.

However, this doesn't help clients who want to separate workloads on a
system basis.


Is there a reason for such stringent separation of DASD? Is it for failure
prevention? Does accounting data get messed up?  It sounds to me like a
completely separate sysplex is out of the question because it costs more
than an extra set of systems in an existing plex.  Maybe it would help if
I understood what's being walled out and what's being walled in.



-Nick Jones
Logger L3

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
