> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Mark Zelden
> Sent: Wednesday, September 11, 2013 12:34 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: PDS/E, Shared Dasd, and COBOL V5
>
> On Wed, 11 Sep 2013 19:08:19 +0000, Gibney, Dave <gib...@wsu.edu> wrote:
>
> >>
> >> Neither. zFS can safely be shared R/O outside sysplex / GRS
> >> boundaries. BTW, with PDSESHARING(NORMAL), PDSE can
> >> be shared R/W outside the sysplex, but the boundaries must still
> >> be within the GRSplex. However, they can't be shared R/W between
> >> systems managed by MII (MIM) because the ENQs issued for
> >> SYSZIGW0 and SYSZIGW1 are issued with RNL=NO. IIRC, this
> >> wasn't always the case and IBM "broke this" for CA MIM
> >> customers around DFSMS/MVS 1.5. Thank you IBM.
> >>
> >> Regards,
> >
> > I usually don't disagree with Mark, but my experience is that without the XCF
> > communication Barbara mentioned (i.e. inside a sysplex), updating a PDS/E
> > from one LPAR can cause abends of address spaces in another LPAR that are
> > actively using the PDS/E.
> > Specifically, when VPS from LRS shipped with all PDS/E including the CNTL
> > libraries, I abended my production VPS by modifying shared printer
> > definitions from the development LPAR.
>
> XCF is not involved when PDSE is using PDSESHARING(NORMAL). Were
> you using PDSESHARING(EXTENDED)? Was PDSESHARING(NORMAL)
> in effect on BOTH systems involved? Were your 2 systems involved
> in the same GRS ring? With PDSESHARING(NORMAL) you would not
> have even been able to update a member on one system if the other system
> had the PDSE opened for update. The sharing is on a data set level, not on a
> member level (sharing at the member level on the same system does work).
Very simple set-up. No ring at all. Independent monoplexes. NO serialization. Yes, it is playing with fire :)

For the most part, updates to files on the few shared volumes are done by a sysprog, and we are down to two of us. The point of the shared executable and parm (CNTL) libraries is to minimize differences between the production and devl/test LPARs while separating the impacts.

If I know I am the only person updating a PDS, and I know I'm doing it from LPAR A, then I know I shouldn't have a compress job or the like running in LPAR B. I even know that a job running in LPAR B that has done a BLDL or such will always get the same old member for the life of that BLDL (insert the known caution about new extents).

Again, I know it is playing with fire, but it does work, as long as the updates are controlled, limited, and one-way, or at least nowhere near simultaneous. Update a PDS/E from LPAR A, and information already loaded by active address spaces in LPAR B becomes immediately invalid, leading to fetch and other errors/abends.

And as I stated in an earlier note, our application change management system depends on updating the production libraries from the development LPAR while at the same time supporting ongoing reads for execution from both the production and development LPARs.

> Regards,
>
> Mark
> --
> Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
> mailto:m...@mzelden.com
> ITIL v3 Foundation Certified
> Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
> Systems Programming expert at http://search390.techtarget.com/ateExperts/

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions, send email to
lists...@listserv.ua.edu with the message: INFO IBM-MAIN
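The stale-pointer failure described above (an update from LPAR A invalidating what address spaces on LPAR B have already loaded) can be sketched in miniature. This is an illustrative analogy, not z/OS code: `Dataset`, `bldl`, and `fetch` are invented toy names standing in for a PDSE's directory, the BLDL macro, and program fetch, and the "compress" is modeled as a simple in-place rewrite.

```python
class Dataset:
    """Toy stand-in for a PDSE: members stored at offsets ("TTR"-like)."""
    def __init__(self, members):
        self.blocks = {}      # offset -> member data
        self.directory = {}   # member name -> offset
        offset = 0
        for name, text in members.items():
            self.directory[name] = offset
            self.blocks[offset] = text
            offset += 1

def bldl(ds, name):
    """Like BLDL: resolve a member name to a location once, then reuse it."""
    return ds.directory[name]

def fetch(ds, offset):
    """Like program fetch using a previously built directory entry."""
    if offset not in ds.blocks:
        raise RuntimeError("stale directory entry: member moved or gone")
    return ds.blocks[offset]

# "LPAR B" opens the library and caches a member location.
lib = Dataset({"PRINTER1": "old defs"})
cached = bldl(lib, "PRINTER1")

# "LPAR A", with no shared serialization, rewrites the library in place
# (an update followed by a compress): members move, old offsets vanish.
lib.blocks = {99: "new defs"}
lib.directory = {"PRINTER1": 99}

# LPAR B's cached location no longer maps to valid data -- the analogue of
# the fetch errors/abends that follow an uncoordinated cross-LPAR update.
try:
    fetch(lib, cached)
except RuntimeError as exc:
    print(exc)
```

The model also shows why the scheme "works" when updates are one-way and controlled: a fresh `bldl` after the rewrite resolves cleanly; only locations cached before the update are poisoned.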