You may not be able to create a VSAM dataset using a symbol for the volume,
but after it is created you can alter the catalog entry to specify the
symbol.
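
For example, a rough IDCAMS sketch of that recatalog step (the dataset name
and the &ZFSVOL symbol are only placeholders, and whether VOLUMES will
accept a symbol on a DEFINE RECATALOG is exactly the part you would need to
verify on your system):

  //RECAT    EXEC PGM=IDCAMS
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    /* Drop only the catalog entry; the zFS data stays on the pack */
    DELETE OMVS.ROOT NOSCRATCH
    /* Rebuild the entry from the VVDS, pointing at the symbol     */
    DEFINE CLUSTER( NAME(OMVS.ROOT)  -
                    LINEAR           -
                    VOLUMES(&ZFSVOL) -
                    RECATALOG )
  /*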

Since you already have system symbols that contain unique values for each
system, why not add one more that specifies a unique qualifier for each ROOT
dataset?  Then use that symbol in the DSN you specify in BPXPRMxx.  Each
dataset would then have its own catalog entry (which might eliminate the
need for &hfsv1), and that might let you simplify your current HFS
processing.  It will also eliminate the name collision in the master catalog
when you convert to zFS.  (This method does cause master catalog updates,
which should not be an issue, and it still keeps your changes from touching
the production LPARs.)
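
A minimal sketch of what that could look like (the &ZFSQUAL symbol and the
dataset names are only placeholders; use whatever fits your naming
conventions).  In IEASYMxx, one SYSDEF per LPAR alongside the symbols you
already define:

  SYSDEF LPARNAME(TEST1)
         SYMDEF(&ZFSQUAL='TST1')

Then in BPXPRMxx, let the symbol supply the unique qualifier:

  ROOT FILESYSTEM('OMVS.&ZFSQUAL..ROOT')
       TYPE(ZFS)
       MODE(RDWR)

(Note the double period: the first ends the symbol, the second is the real
qualifier delimiter.)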

You could use this symbol in the DSN for most "common" datasets, VSAM or
non-VSAM, that have separate copies on each system, but it is only mandatory
for VSAM.
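
For the other USS datasets you distribute the same way, the symbol could
carry through their BPXPRMxx MOUNT statements as well (again, &ZFSQUAL and
the names here are only placeholders):

  MOUNT FILESYSTEM('OMVS.&ZFSQUAL..ETC')
        TYPE(ZFS)
        MODE(RDWR)
        MOUNTPOINT('/etc')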

If you do a DELETE/NOSCRATCH on a volume, the dataset is still in the VTOC,
so how can you then do a NEW/KEEP of the same name to the same volume?  Or
did you mean you delete (scratch) the dataset without updating the catalog?

:>: -----Original Message-----
:>: From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
:>: Behalf Of Bonno, Tuco
:>: Sent: Friday, October 26, 2012 4:13 AM
:>: To: IBM-MAIN@LISTSERV.UA.EDU
:>: Subject: maintain, distribute multiple copies same-named ZFS files
:>:
:>: operating environment here is all z/os 1.13
:>: how am i supposed to clone and DISTRIBUTE (with emphasis on *distribute*)
:>: zfs files?
:>: CURRENTLY, I have a situation where 6 lpars, 3 production and 3 test,
:>: all share the same master catalog.  In that master catalog, the entry
:>: for OMVS.ROOT reads as follows, (i.a.,) :   "volser ----  &hfsv1" ;
:>: and there are 6 3390s out there, one for each lpar, each of which has a
:>: copy of OMVS.ROOT on it.  Currently, each such omvs.root is an HFS file.
:>: Whenever any lpar is ipled, the value of &hfsv1 is set by means of an
:>: entry in an appropriate IEASYM-- in an appropriate PARMLIB, and each lpar runs
:>: with its own copy of the root,  called .... OMVS.ROOT .
:>: whenever i have a NEW COPY of omvs.root that needs to be distributed, i
:>: (1) drain the 3 test lpars; (2) go to the pack in each test lpar which
:>: contains  omvs.root and do a "DELETE/NOSCRATCH"  for the omvs.root on
:>: that pack ; (3)  from a 7th lpar, i copy the new version of omvs.root to
:>: the pack in each test lpar which is supposed to have a copy of omvs.root
:>: on it, using a DISP=(NEW,KEEP).  Later on, those 3 "test" lpars are
:>: ipled as production lpars.   in this methodology, the entry in the
:>: master catalog is never touched, and nothing in any of the 3 production
:>: lpars is ever impacted by whatever i may be doing in any test lpar.
:>: (This g.p. methodology is how i maintain ALL the o/s image dsn-s)  (and,
:>: btw, i use this methodology to distribute a couple DOZEN omvs/unix-
:>: system-services dsn-s, not just omvs.root)
:>:
:>: the recent emphasis from ibm has been to convert one's HFS files to
:>: ZFS.
:>:
:>: so, now, given that zfs files are really vsam files,  (1) how does one
:>: maintain multiple copies of zfs files off of one master catalog, given
:>: that you can't catalog vsam files using symbolic values in the catalog
:>: entry ;  (2) next, assuming that there is some method -- unknown to me
:>: at the present moment -- to maintain multiple copies of identically-named
:>: vsam files off of one catalog , how do i go about distributing new
:>: copies of zfs files, using my methodology that I described above?  First
:>: of all, if I follow the above-described scenario, the first time i try
:>: to delete a zfs file (a vsam file, now) , isn't that going to also erase
:>: the catalog entry for it  (and wreak some havoc for the remaining 5
:>: lpars, which are also still using the catalog) ?   Second of all, there
:>: is/are the VVDS entries, something I need not bother myself about in my
:>: current methodology (b/c in that, no vsam files are involved) -- how do
:>: i keep them sync-ed up?   (3) i tried to consult several books about
:>: this , i.a., "z/os distributed file service file system implementation",
:>: "distributed file service zSeries file system Administration"  and "
:>: volume 9, ABCs of system programming" and "z/FS reorganization tool" ,
:>: and i just can't get this to work , at least in the context of the
:>: methodology i currently use to distribute
:>: omvs/unix-system-services  dsn-s.  Maybe i need to totally change my
:>: methodology?  EXACTLY how are other people doing this kind of thing?
