> -----Original Message-----
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Mark Jacobs
> Sent: Tuesday, October 23, 2007 9:26 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 

<snip>

> Not exactly an answer to your question, but:
> 
> If you are sharing DASD you will have to implement GRS or a GRS-like
> product. In a parallel sysplex you can use the GRS Star functionality,
> which is a vast improvement over GRS ring processing.
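
For anyone following along: star mode is selected at IPL time and needs a
GRS lock structure in the coupling facility. From memory, and with the CF
names and sizing being placeholders rather than recommendations, the
relevant pieces look roughly like this:

    IEASYSxx:   GRS=STAR

    CFRM policy (IXCMIAPU, DATA TYPE(CFRM)):
       STRUCTURE NAME(ISGLOCK)
                 SIZE(...)
                 PREFLIST(CF01,CF02)

Check the MVS Initialization and Tuning Reference and Setting Up a Sysplex
for the real sizing rules before copying any of that.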
> 
> With a basic sysplex you can implement a GRS ring with sysplex support,
> while in two separate monoplexes you will have to use the original GRS
> ring over BCTCs, without the assistance of XCF communications.
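
Roughly, and again from memory: in a basic sysplex the ring traffic can ride
on XCF signalling, so IEASYSxx just needs one of the ring options, while
outside a sysplex the GRSCNFxx GRSDEF statement has to name the dedicated
CTC devices. The device numbers below are made up, and the keywords are
worth verifying against the Init and Tuning Reference for your release:

    IEASYSxx:   GRS=TRYJOIN             (or JOIN/START)

    GRSCNFxx:   GRSDEF MATCHSYS(*)
                       CTC(0A40)        <- only needed when the ring runs
                       CTC(0A41)           over BCTCs, i.e. no XCF signalling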
> 
> There are also tape sharing considerations between sysplex and
> monoplex environments.
> 
> -- 
> Mark Jacobs

I should have mentioned that we already run MIMIT (MIM Integrity) to
stop programmers from destroying data sets (for example, by link-editing
into a source PDS). It could also cover many of the GRS-type functions
between the two systems. We have also used MIM Allocation to share tape
drives in the past.
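
The classic exposure that MIM Integrity (or GRS) covers for us is the
cross-system SYSDSN enqueue. A made-up example of the kind of job that does
the damage, with the data set name obviously invented:

    //BADLKED EXEC PGM=IEWL
    //SYSLMOD  DD  DSN=PROD.SOURCE.PDS,DISP=OLD

DISP=OLD only gets an exclusive SYSDSN enqueue on the system where the job
runs; unless GRS or a MIM-type product propagates it, someone editing that
PDS on the other image has no protection at all.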

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology


