Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Arthur Gutowski
On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden 
mark.zel...@zurichna.com wrote:
[...]
There is no requirement for the directory
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex wide IPL in order to re-create
a larger sysplex root.   Oh... make sure you are using a single
BPXPRMxx member if not already and pay attention to AUTOMOVE /
UNMOUNT in your MOUNT definitions.
[...]

Wow, that must have been some growth.  Either that or the sysplex root was 
defined with an exceedingly small primary extent and exceedingly small or no 
secondary extents.  It's been a while since I looked, but isn't the architected 
limit something like 123 extents or 59 volumes, whichever you hit first?

As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms 
of inheritance (filesystems mounted off system root and the system root 
should be set the same), but it may also have implications for your 
clone/migration process.  I've found the need to add an unmount step to our 
clone JCL because we chose AUTOMOVE, and we have only once performed a 
sysplex-wide shutdown and IPL.

Regards,
Art Gutowski
Ford Motor Company

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 08:44:08 -0600, Arthur Gutowski aguto...@ford.com wrote:

On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden
mark.zel...@zurichna.com wrote:
[...]
There is no requirement for the directory
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex wide IPL in order to re-create
a larger sysplex root.   Oh... make sure you are using a single
BPXPRMxx member if not already and pay attention to AUTOMOVE /
UNMOUNT in your MOUNT definitions.
[...]

Wow, that must have been some growth.  Either that or the sysplex root was
defined with an exceedingly small primary extent and exceedingly small or no
secondary extents.  It's been a while since I looked, but isn't the architected
limit something like 123 extents or 59 volumes, whichever you hit first?

I can't go back and look since it was a former client of mine, but a few 
points:

1) It was probably created with the sample SYS1.SAMPLIB(BPXISYSR),
which has a space definition of 2 cyl and no secondary.  Even if they
added a secondary of 2, 250 cyls isn't much.

2) While you can add a secondary extent with CONFIGHFS, adding volumes
requires an unmount / remount of the HFS.  

3) A 3390-3 isn't all that big anyway.  
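
As an aside, the online extend mentioned in point 2 can be driven from the
z/OS UNIX shell with confighfs; the size and path below are made-up
placeholders:

```
confighfs -x 50c /some/mountpoint
```

This adds a 50-cylinder extent to the HFS mounted at that path; per point 2,
spreading to additional volumes still requires an unmount / remount.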


As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms
of inheritance (filesystems mounted off system root and the system root
should be set the same), but it may also have implications for your
clone/migration process.  I've found the need to add an unmount step to our
clone JCL because we chose AUTOMOVE, and we have only once performed a
sysplex-wide shutdown and IPL.


Good point.  That isn't part of our JCL, but it is part of our procedure to
check whether the target sysres set being reused still has its unix files
mounted (the most likely answer is YES).  If so, there is a rexx exec to
unmount them.  I suppose it makes sense to just add that as a step to the
clone JCL for those sysplexes.
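
For anyone wanting to fold that into clone JCL, a hedged sketch of such a
step; the exec name, its parameter, and the dataset name are hypothetical
stand-ins, and the exec itself (which would issue TSO UNMOUNT
FILESYSTEM(...) IMMEDIATE for each file system on the target set) is left
to the reader:

```
//UNMNT    EXEC PGM=IKJEFT01
//SYSEXEC  DD   DISP=SHR,DSN=SYS1.SYSPROG.EXEC       HYPOTHETICAL DSN
//SYSTSPRT DD   SYSOUT=*
//SYSTSIN  DD   *
  %UNMNTFS SYSRA
/*
```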

I can't do that in one of my other environments because we share the
sysres set across sysplexes and the cloning is done from a development
sysplex that has the maintenance unix files mounted.  We have to manually
go check in the 3 sysplexes (one a sandbox) that have the shared file systems
and unmount the target set's files.  The sharing of resources between 2 
of those 3 sysplexes (prod/devl) goes back long before the existence of
PDSE and HFS (and the XCF / sysplex requirement for sharing) and they use 
MII for integrity. Thus the kludge.  

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
I'm having trouble understanding why an IPL'ed system's sysres and USS files
are being used for cloning.  I've always used a staging concept wherein
the SMP/E target sysres and USS file(s) for each running z/OS release are *
never* IPLed.  This relieves a lot of issues and provides a base, if you
will, for all maintenance and cloning.  What are the perceived disadvantages
of this type of staged clone process?  I've never had any issues
with it, but I may just have been lucky.  TIA

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 10:18 AM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 08:44:08 -0600, Arthur Gutowski aguto...@ford.com
 wrote:

 On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden
 mark.zel...@zurichna.com wrote:
 [...]
 There is no requirement for the directory
 file system, but I've seen a shop's sysplex root grow because all of
 the directories / mount points were getting created within the sysplex
 root.  This eventually led to a sysplex wide IPL in order to re-create
 a larger sysplex root.   Oh... make sure you are using a single
 BPXPRMxx member if not already and pay attention to AUTOMOVE /
 UNMOUNT in your MOUNT definitions.
 [...]
 
 Wow, that must have been some growth.  Either that or the sysplex root was
 defined with an exceedingly small primary extent and exceedingly small or
 no
 secondary extents.  It's been a while since I looked, but isn't the
 architected
 limit something like 123 extents or 59 volumes, whichever you hit first?

 I can't go back and look since it was a former client of mine, but a few
 points:

 1) It was probably created with the sample SYS1.SAMPLIB(BPXISYSR),
 which has a space definition of 2 cyl and no secondary.  Even if they
 added a secondary of 2, 250 cyls isn't much.

 2) While you can add a secondary extent with CONFIGHFS, adding volumes
 requires an unmount / remount of the HFS.

 3) A 3390-3 isn't all that big anyway.

 
 As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms
 of inheritance (filesystems mounted off system root and the system
 root
 should be set the same), but it may also have implications for your
 clone/migration process.  I've found the need to add an unmount step to
 our
 clone JCL because we chose AUTOMOVE, and we have only once performed a
 sysplex-wide shutdown and IPL.
 

 Good point.  That isn't part of our JCL, but it is part of our procedure to
 check if the target sysres set being reused still has its unix files
 mounted
 (the most likely answer is YES).   If it is, there is a rexx exec to
 unmount
 them.   I suppose it makes sense to just add that as a step to the clone
 JCL for those sysplexes.

 I can't do that in one of my other environments because we share the
 sysres set across sysplexes and the cloning is done from a development
 sysplex that has the maintenance unix files mounted.  We have to manually
 go check in the 3 sysplexes (one sandbox) that has the shared file systems
 and unmount the target set's files.  The sharing of resources between 2
 of those 3 sysplexes (prod/devl) goes back long before the existence of
 PDSE and HFS (and the XCF / sysplex requirement for sharing) and they use
 MII for integrity. Thus the kludge.

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html





Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 10:31:05 -0500, Guy Gardoit ggard...@gmail.com wrote:

I'm having trouble understanding why an IPL'ed system's sysres and USS files
are being used for cloning.  I've always used a staging concept wherein
the SMP/E target sysres and USS file(s) for each running z/OS release are *
never* IPLed.   This relieves a lot of issues and provides a base, if you
will, for all maintenance and cloning.  What are the perceived disadvantages
of this type of staged clone process?   I've never had any issues
with it but I may just have been lucky.  TIA

Because they stay mounted across IPLs in a shared file system - even 
though they may not be in use.

For example,  you are IPLed on set A, then you roll out set B. Half the
plex is using the A set of sysres files, half the B set.  Now you roll out
the C set.  The rest of the plex is now on B or C, but none are using
the A set.You need to clone a new set and A is available.  Unless
you had done a sysplex wide IPL, set A's files are still mounted in the
sysplex.
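
A quick way to confirm this from any member's console (a standard DISPLAY
command; output omitted here):

```
D OMVS,F
```

Every file system in the shared file system configuration is listed
sysplex-wide, so set A's version root and its children will still appear,
owned by whichever system they last moved to.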

HTH,

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
Actually, they don't.  They are only mounted (at a '/service' directory in
the sandbox system) when service is applied, and unmounted when service
application is complete.  The way I do staging is to NEVER use the staging
resvol(s) nor $VERSION FS(s) for an IPLed system.  They are only
used for cloning and nothing else.  There are no issues with quiescing any
staging FS since it is not mounted anywhere during the cloning process (one
simple job); there is a base resvol(s) that is NEVER IPLed.

To make things clearer, bear with me while I give an example.  A sysplex has
two possible volumes to IPL from, say SYSRS1 or SYSRS2, which implies two
possible VERSION ROOT file systems, say OMVS.**.SYSRS1 or
OMVS.**.SYSRS2.  These volumes and $VERSION ROOT file systems (the ones NOT
IPLed from - which implies cloning cannot be done unless all systems in the
plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
desirable situation) are recreated from the staging resvol, say STGRES, and
the staging $VERSION ROOT FS, say OMVS.**.STGRES.

The clone process uses the staging items as the source and the unused
sysres/$VERSION ROOT FS(s) as the destination.

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 10:58 AM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 10:31:05 -0500, Guy Gardoit ggard...@gmail.com
 wrote:

 I'm having trouble understanding why an IPL'ed system's sysres and USS
 files
 are being used for cloning.  I've always used a staging concept wherein
 the SMP/E target sysres and USS file(s) for each running z/OS release are
 *
 never* IPLed.   This relieves a lot of issues and provides a base, if
 you
 will, for all maintenance and cloning.  What are the perceived
 disadvantages
 of this type of staged clone process?   I've never had any issues
 with it but I may just have been lucky.  TIA

 Because they stay mounted across IPLs in a shared file system - even
 though they may not be in use.

 For example,  you are IPLed on set A, then you roll out set B. Half the
 plex is using the A set of sysres files, half the B set.  Now you
 roll out
 the C set.  The rest of the plex is now on B or C, but none are using
 the A set.You need to clone a new set and A is available.  Unless
 you had done a sysplex wide IPL, set A's files are still mounted in the
 sysplex.

 HTH,

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html





Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 11:15:43 -0500, Guy Gardoit ggard...@gmail.com wrote:

Actually, they don't.  They are only mounted (at a '/service' directory in
the sandbox system) when service is applied, and unmounted when service
application is complete.  The way I do staging is to NEVER use the staging
resvol(s) nor $VERSION FS(s) for an IPLed system.  They are only
used for cloning and nothing else.  There are no issues with quiescing any
staging FS since it is not mounted anywhere during the cloning process (one
simple job); there is a base resvol(s) that is NEVER IPLed.

To make things clearer, bear with me while I give an example.  A sysplex has
two possible volumes to IPL from, say SYSRS1 or SYSRS2, which implies two
possible VERSION ROOT file systems, say OMVS.**.SYSRS1 or
OMVS.**.SYSRS2.  These volumes and $VERSION ROOT file systems (the ones NOT
IPLed from - which implies cloning cannot be done unless all systems in the
plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
desirable situation) are recreated from the staging resvol, say STGRES, and
the staging $VERSION ROOT FS, say OMVS.**.STGRES.

The clone process uses the staging items as the source and the unused
sysres/$VERSION ROOT FS(s) as the destination.

Guy Gardoit
z/OS Systems Programming


I'm not sure I understand what you are saying. 

First, you say you won't roll out a new sysres set until all systems are
migrated to the same (most current) set. Correct?  That's true most 
of the time for us, but not always (oh, for a perfect world - that is why
we need at least 3 target sets).  Regardless, it has nothing to do 
with the problem at hand.  It can happen with just 2 sets.

Second, you say the clone process uses the unused sysres version root
as the target.That is exactly what we do.   The problem is, if you
do rolling IPLs, regardless of all systems being IPLed on the same version
root / files, even the unused version files ("destination", to use your
terminology) will remain mounted unless you take overt action to
unmount them.

Do you always do sysplex wide IPLs?  What am I missing?

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
If all systems in a plex are IPLed from the new sysres-root combo, the old
FS are not mounted anywhere and can be deleted by the clone job. And,
yes, it does take a perfect world to get each sysplex to be on *one* set
of resvol(s) and one $VERSION root FS, but I guess I have lived in a perfect
world because that was not a problem in our complex of 4 parallel sysplexes.

We don't always do sysplex-wide IPLs but roll out the new sysres-root combo
to all members of our plexes as soon as doable. Once all systems in a plex
are on the new set, the old set is not mounted anywhere.  I don't
understand why yours would be.  The next clone process does not take place
until that task is accomplished.  If an emergency fix is required, each
IPLable sysres has an associated SMP/E target zone to make that possible,
though of course it is not desirable.

Hope this clears things up a bit.

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 12:08 PM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 11:15:43 -0500, Guy Gardoit ggard...@gmail.com
 wrote:

 Actually, they don't.  They are only mounted (at a '/service' directory in
 the sandbox system) when service is applied, and unmounted when service
 application is complete.  The way I do staging is to NEVER use the staging
 resvol(s) nor $VERSION FS(s) for an IPLed system.  They are only
 used for cloning and nothing else.  There are no issues with quiescing any
 staging FS since it is not mounted anywhere during the cloning process (one
 simple job); there is a base resvol(s) that is NEVER IPLed.
 
 To make things clearer, bear with me while I give an example.  A sysplex has
 two possible volumes to IPL from, say SYSRS1 or SYSRS2, which implies two
 possible VERSION ROOT file systems, say OMVS.**.SYSRS1 or
 OMVS.**.SYSRS2.  These volumes and $VERSION ROOT file systems (the ones NOT
 IPLed from - which implies cloning cannot be done unless all systems in the
 plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
 desirable situation) are recreated from the staging resvol, say STGRES, and
 the staging $VERSION ROOT FS, say OMVS.**.STGRES.
 
 The clone process uses the staging items as the source and the unused
 sysres/$VERSION ROOT FS(s) as the destination.
 
 Guy Gardoit
 z/OS Systems Programming
 

 I'm not sure I understand what you are saying.

 First, you say you won't roll out a new sysres set until all systems are
 migrated to the same (most current) set. Correct?  That's true most
 of the time for us, but not always (oh, for a perfect world - that is why
 we need at least 3 target sets).  Regardless, it has nothing to do
 with the problem at hand.  It can happen with just 2 sets.

 Second, you say the clone process uses the unused sysres version root
 as the target.That is exactly what we do.   The problem is, if you
 do rolling IPLs, regardless of all systems being IPLed on the same version
 root / files, even the unused version files (destination to use your
 terminology) will remain mounted unless you take overt action to
 unmount them.

 Do you always do sysplex wide IPLs?  What am I missing?

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html





Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 12:52:49 -0500, Guy Gardoit ggard...@gmail.com wrote:


We don't always do sysplex-wide IPLs but roll out the new sysres-root combo
to all members of our plexes as soon as doable. Once all systems in a plex
are on the new set, the old set is not mounted anywhere.  I don't
understand why yours would be.  

And I don't understand how yours aren't.  :-)   Don't you use AUTOMOVE
for your version files?  Without overt action to unmount them, they would
move to another system forever if you always do rolling IPLs.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
Eh!  I do apologize.  The first step in the clone job performs a Rexx exec
that ensures no system in the plex is running from the target resvol and
also unmounts the associated FS.  So routine that I forgot what the job
does!

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 1:13 PM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 12:52:49 -0500, Guy Gardoit ggard...@gmail.com
 wrote:


 We don't always do sysplex-wide IPLs but roll out the new sysres-root
 combo
 to all members of our plexes as soon as doable. Once all systems in a
 plex
 are on the new set, the old set is not mounted anywhere.  I don't
 understand why yours would be.

 And I don't understand how yours aren't.  :-)   Don't you use AUTOMOVE
 for your version files?  Without overt action to unmount them, they would
 move to another system forever if you always do rolling IPLs.

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html





Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 13:40:14 -0500, Guy Gardoit ggard...@gmail.com wrote:

Eh!  I do apologize.   The first step in the clone job performs a Rexx exec
that ensures no system in the plex is running from the target resvol and
also unmounts the associated FS.   So routine that I forgot what the job
does!

Guy Gardoit
z/OS Systems Programming

Ahhh... glad you found it.  I hate mysteries in data processing.  So you are
basically doing the same thing as Art is.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Cwi Jeret
We implemented shared HFS in our sysplex with 4 members
a few months ago.

Now we want to upgrade the system of 1 of the 4 members of
the sysplex with an updated ROOT, as part of maintenance service for that
system, but leave the 3 other members in the old state.

As recommended in UNIX System Services Planning, chapter
7.6.2, Adding a system-specific or version root file system to your shared
file system configuration,
we added the new ROOT as a new version root to the shared file system.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted at mount points in
the old root cannot also be mounted at the corresponding mount points in
the new root for the upgraded system.
In this situation we cannot run the applications that use these file systems
on the upgraded member of the sysplex.

Does anyone know a solution for this problem?

Cwi Jeret 
Bank-Hapoalim T.A
 



Re: USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Veilleux, Jon L
We use symbolic links to point to any of the files that are not
distributed as part of the root. This allows us the flexibility to share
files between system roots. When we first build the system root, we add
a /product symbolic link to /usr/lpp which points to a separately
mounted directory (under the system root) where we create subdirectories
to mount all of our products.
There are a couple of SHARE presentations on how to accomplish this.
Feel free to contact me off list if you want more information.
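
The symlink indirection itself is ordinary Unix plumbing.  Here is a
runnable sketch in a scratch directory; all paths are stand-ins ("stage"
plays the role of the staged system root, and on a real system the link
target would be the absolute path of a separately mounted product tree):

```shell
#!/bin/sh
set -e
# Build a throwaway "staged root" containing the directory we want to
# redirect ("stage" stands in for the real staging mount point).
mkdir -p stage/usr/lpp
# Remove the shipped directory from the staged root...
rm -r stage/usr/lpp
# ...and replace it with a symbolic link into the separately mounted
# product tree (a relative target here so the sketch is self-contained).
ln -s ../../products stage/usr/lpp
readlink stage/usr/lpp
```

Anything later mounted under the product tree is then reachable through the
same path from every system root that carries the link.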
Jon


Jon L. Veilleux 
veilleu...@aetna.com 
(860) 636-2683 


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Cwi Jeret
Sent: Wednesday, January 21, 2009 5:12 AM
To: IBM-MAIN@bama.ua.edu
Subject: USS file sharing in z/OS - Version Upgrade

We implemented shared HFS in our sysplex with 4 members
a few months ago.

Now we want to upgrade the system of 1 of the 4 members of
the sysplex with an updated ROOT, as part of maintenance service for that
system, but leave the 3 other members in the old state.

As recommended in UNIX System Services Planning, chapter
7.6.2, Adding a system-specific or version root file system to your shared
file system configuration,
we added the new ROOT as a new version root to the shared file system.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted at mount points in
the old root cannot also be mounted at the corresponding mount points in
the new root for the upgraded system.
In this situation we cannot run the applications that use these file systems
on the upgraded member of the sysplex.

Does anyone know a solution for this problem?

Cwi Jeret
Bank-Hapoalim T.A
 




Re: USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Mark Zelden
On Wed, 21 Jan 2009 04:12:01 -0600, Cwi Jeret cwi_je...@yahoo.com wrote:

We implemented shared HFS in our sysplex with 4 members
a few months ago.

Now we want to upgrade the system of 1 of the 4 members of
the sysplex with an updated ROOT, as part of maintenance service for that
system, but leave the 3 other members in the old state.

As recommended in UNIX System Services Planning, chapter
7.6.2, Adding a system-specific or version root file system to your shared
file system configuration,
we added the new ROOT as a new version root to the shared file system.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted at mount points in
the old root cannot also be mounted at the corresponding mount points in
the new root for the upgraded system.
In this situation we cannot run the applications that use these file systems
on the upgraded member of the sysplex.

Does anyone know a solution for this problem?


Yes.  Nothing should be mounted off of your version root that is not part
of the OS.  There are 2 scenarios.

For things you share, you mount a directory file system off of your
sysplex root (for shared file systems) and then mount other file
systems off of that one.  There is no requirement for the directory
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex-wide IPL in order to re-create
a larger sysplex root.  Oh... make sure you are using a single
BPXPRMxx member if not already, and pay attention to AUTOMOVE /
UNMOUNT in your MOUNT definitions.

In the case of something that is in the root that you don't share,
you do the same sort of thing with symbolic links that is documented
for setting up CRON with a read-only root (see UNIX System Services
Planning).  This needs to be done each time you do an OS
upgrade (each time you have a new root distributed with ServerPac).
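
To make the first scenario concrete, a hedged BPXPRMxx sketch; every dataset
name and path here is invented for illustration:

```
/* Directory file system hung off the sysplex root, so mount points  */
/* for shared products get created here, not in the sysplex root     */
MOUNT FILESYSTEM('OMVS.SHARED.DIRS')        /* hypothetical name */
      TYPE(HFS) MODE(RDWR)
      MOUNTPOINT('/shared')
      AUTOMOVE

/* Product file systems then mount off the directory file system */
MOUNT FILESYSTEM('OMVS.PRODUCT1')           /* hypothetical name */
      TYPE(HFS) MODE(READ)
      MOUNTPOINT('/shared/product1')
      AUTOMOVE
```

System-specific file systems would instead be coded with UNMOUNT, so they
come down with the owning system rather than moving to another one.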

For example, let's say each system has some unique file system
that needs to be mounted at /usr/local (which is in the OS / version root).

1) Create /etc/local  (/etc is chosen because it is already a
system-specific file system)

2) Create a symbolic link for /usr/local that points to /etc/local.
For example, if your maintenance OS / version root is mounted
at /service:

rm -fr /service/usr/local
ln -s /etc/local /service/usr/local

3) IPL with the new root that has the symbolic link, then
mount your file system at MOUNTPOINT('/&SYSNAME./etc/local')
and use the UNMOUNT parm.
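
In BPXPRMxx terms, step 3 might look like this (the dataset name is a
hypothetical stand-in):

```
MOUNT FILESYSTEM('OMVS.&SYSNAME..LOCAL')    /* hypothetical name */
      TYPE(HFS) MODE(RDWR)
      MOUNTPOINT('/&SYSNAME./etc/local')
      UNMOUNT
```

The &SYSNAME. symbol resolves per system, and UNMOUNT keeps the file system
from being moved when that system leaves the sysplex.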

Hope this helps.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

 
