We have done this to two 3-system parallel sysplexes.  One system was removed 
from each 3-system sysplex and the removed systems were each defined as a 
monoplex and rejoined to their former partners in a GRSPLEX.  So we now have 
two GRSPLEXes:  each with a 2-system parallel sysplex GRS'd with a monoplex.  

The complications were few but could be significant depending on your site.  
The clients only had one issue, and that was due to PDSE sharing.  

() We left the system names and the SMFID names the same but gave the 
monoplexes new sysplex names, which affected one system component and a few STC 
PROCs.
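As a rough sketch of that change (all member suffixes, system names, and the sysplex name below are made up for illustration), the monoplex side amounts to an IEASYSxx that runs as a single-system sysplex but joins the GRS ring over CTCs, plus a COUPLExx that carries the new sysplex name:

```
/* IEASYSxx fragment for the new monoplex (hypothetical names)  */
PLEXCFG=MONOPLEX,          /* run as a single-system sysplex    */
SYSNAME=SYSA,              /* system name kept the same         */
GRS=TRYJOIN,               /* join the GRS ring over the CTCs   */
GRSCNF=01,                 /* GRS config in GRSCNF01            */
COUPLE=M1                  /* COUPLEM1 names the new sysplex    */

/* COUPLEM1 fragment: the new sysplex name                      */
COUPLE SYSPLEX(PLEXM1)
```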

() PDSE sharing had to be downgraded to NORMAL for all three systems in each 
GRSPLEX.  This is very problematic if the systems really share PDSEs for 
UPDATE, because NORMAL is very restrictive (read the PDSE Usage Guide 
closely).  As a result, many PDSEs had to be cloned so that there is now one 
PDSE per system: for many PDSEs, we went from one PDSE serving 3 systems with 
EXTENDED sharing to three PDSEs, each serving one system with NORMAL sharing.  
Fault Analyzer PDSEs and LNKLST PDSEs were the most affected.  The SMSPDSE1 
address space is no longer active with NORMAL sharing.
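The sharing level itself is a one-line change in IGDSMSxx (member suffix hypothetical):

```
/* IGDSMSxx fragment: drop PDSE sharing to NORMAL.  EXTENDED    */
/* requires all sharing systems to be in the same sysplex,      */
/* which is no longer true once the monoplex is split out.      */
PDSESHARING(NORMAL)
```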

() The GRS CTCs had to be defined as BCTC so an HCD gen was required.  
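Once the BCTCs exist, they are named to GRS in GRSCNFxx; a sketch for ring mode (device numbers and the system name are placeholders, so verify the keywords against the Init and Tuning Reference):

```
/* GRSCNF01 fragment: GRS ring over the new BCTC devices        */
GRSDEF MATCHSYS(SYSA)      /* this statement applies to SYSA    */
       CTC(0C20)           /* BCTC to first sysplex member      */
       CTC(0C21)           /* BCTC to second sysplex member     */
```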

() Enhanced Catalog Sharing (ECS) must be turned off, so you may see a 
performance impact on shared catalog activity.
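ECS is controlled through the catalog address space modify command; these are the operands I would start with, but check the Managing Catalogs manual for your release:

```
F CATALOG,ECSHR(STATUS)    /* show ECS state for this system    */
F CATALOG,ECSHR(DISABLE)   /* stop using ECS on this system     */
```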

() Automatic tape sharing cannot cross the sysplex boundary, so the tape 
drives had to be divided between the sysplex and the monoplex; the systems 
remaining in the sysplex can still share their portion of the drives among 
themselves.
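Dividing the drives can be as simple as leaving each side's share OFFLINE at IPL (in HCD or the OS config) and varying accordingly; device numbers below are placeholders:

```
V 0580-0583,ONLINE         /* this system's share of the drives */
V 0584-0587,OFFLINE        /* drives given to the monoplex      */
```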

() Obviously, the monoplex can no longer participate in the sysplex's spool, 
so you may need to set up a separate spool and NJE.  Depending on how you 
submit jobs, this may or may not become another issue (for example, for a 
scheduler or a restart subsystem).
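If you go the NJE-over-TCP/IP route, the JES2 side is only a handful of initialization statements; a rough sketch, with made-up node names and an address that is purely a placeholder:

```
/* JES2 init fragment: NJE between the sysplex and the monoplex */
NJEDEF  OWNNODE=1                  /* this side is node 1       */
NODE(1) NAME=PLEXA
NODE(2) NAME=PLEXM1                /* the new monoplex          */
LINE(1)    UNIT=TCPIP
NETSRV(1)  SOCKET=LOCAL            /* NJE listener on this side */
SOCKET(M1) NODE=2,IPADDR=10.1.1.2  /* placeholder address       */
```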

() Some components that use XCF must be reconfigured to use TCP/IP or SNA to 
communicate, such as CCI and MainView.

There are other issues, like BRODCAST and DAE, but they are minor.  The 
biggest loss after PDSE sharing was in recovery.  A crash or even a normal 
shutdown must be responded to carefully, or GRS may simply shut down across 
the remaining GRS members.  GRS can be restarted, but it is not always 
obvious that this has happened as messages fly past.  If you have automation, 
you may want to ensure it traps all the relevant messages, alerts the 
operator, and instructs the operator how to restart GRS on the remaining 
members.
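For example, an MPFLSTxx entry can hand the GRS (ISG-prefix) messages to automation rather than letting them scroll by, and the ring is restarted with a VARY GRS command (verify the exact message IDs you care about against the GRS messages manual):

```
/* MPFLSTxx fragment: flag all GRS messages for automation      */
ISG*,SUP(NO),AUTO(YES)

/* Operator command to rebuild the ring on the remaining members */
V GRS(ALL),RESTART
```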

regards, Joe D'Alessandro

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN