Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Arthur Gutowski
On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden 
mark.zel...@zurichna.com wrote:
[...]
There is no requirement for the directory
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex wide IPL in order to re-create
a larger sysplex root.   Oh... make sure you are using a single
BPXPRMxx member if not already and pay attention to AUTOMOVE /
UNMOUNT in your MOUNT definitions.
[...]

Wow, that must have been some growth.  Either that or the sysplex root was 
defined with an exceedingly small primary extent and exceedingly small or no 
secondary extents.  It's been a while since I looked, but isn't the architected 
limit something like 123 extents or 59 volumes, whichever you hit first?

As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms 
of inheritance (filesystems mounted off system root and the system root 
should be set the same), but it may also have implications for your 
clone/migration process.  I've found the need to add an unmount step to our 
clone JCL because we chose AUTOMOVE, and we have only once performed a 
sysplex-wide shutdown and IPL.

Regards,
Art Gutowski
Ford Motor Company

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 08:44:08 -0600, Arthur Gutowski aguto...@ford.com wrote:

On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden
mark.zel...@zurichna.com wrote:
[...]
There is no requirement for the directory
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex wide IPL in order to re-create
a larger sysplex root.   Oh... make sure you are using a single
BPXPRMxx member if not already and pay attention to AUTOMOVE /
UNMOUNT in your MOUNT definitions.
[...]

Wow, that must have been some growth.  Either that or the sysplex root was
defined with an exceedingly small primary extent and exceedingly small or no
secondary extents.  It's been a while since I looked, but isn't the architected
limit something like 123 extents or 59 volumes, whichever you hit first?

I can't go back and look since it was a former client of mine, but a few 
points:

1) It was probably created with the sample SYS1.SAMPLIB(BPXISYSR),
which has a space definition of 2 cyl and no secondary.  Even if they
added a secondary of 2, that only tops out around 250 cyls, which isn't much.

2) While you can add a secondary extent with CONFIGHFS (see the sketch
after this list), adding volumes requires an unmount / remount of the HFS.  

3) A 3390-3 isn't all that big anyway.  
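
For anyone who has not used it, a rough sketch of confighfs from the z/OS
UNIX shell (the path and size here are purely illustrative; verify the exact
flags and any sysplex considerations in the DFSMS documentation for your
release):

   /usr/lpp/dfsms/bin/confighfs /          # query the HFS mounted at "/"
   /usr/lpp/dfsms/bin/confighfs -x 10c /   # extend it by 10 cylinders while mounted

Adding candidate volumes is the case that needs the unmount / remount, which
is why an undersized sysplex root can end up forcing the kind of disruption
described above.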


As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms
of inheritance (filesystems mounted off system root and the system root
should be set the same), but it may also have implications for your
clone/migration process.  I've found the need to add an unmount step to our
clone JCL because we chose AUTOMOVE, and we have only once performed a
sysplex-wide shutdown and IPL.


Good point.  That isn't part of our JCL, but it is part of our procedure to 
check if the target sysres set being reused still has its unix files mounted
(the most likely answer is YES).   If it does, there is a rexx exec to unmount
them.   I suppose it makes sense to just add that as a step to the clone
JCL for those sysplexes.   
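
For illustration, that check-and-unmount step can be sketched in a few lines
of shell run from a system in the shared file system configuration (the
OMVS.ZOS19A qualifier is just a made-up naming convention for the target
set's unix files, and superuser authority is assumed):

   df | grep "OMVS.ZOS19A." | while read mntpoint rest; do
       echo "unmounting $mntpoint"
       /usr/sbin/unmount "$mntpoint"    # or a TSO UNMOUNT / rexx equivalent
   done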

I can't do that in one of my other environments because we share the
sysres set across sysplexes and the cloning is done from a development
sysplex that has the maintenance unix files mounted.  We have to manually
go check in the 3 sysplexes (one of them a sandbox) that have the shared file systems
and unmount the target set's files.  The sharing of resources between 2 
of those 3 sysplexes (prod/devl) goes back long before the existence of
PDSE and HFS (and the XCF / sysplex requirement for sharing) and they use 
MII for integrity. Thus the kludge.  

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
I'm having trouble understanding why an IPL'ed system's sysres and USS files
are being used for cloning.  I've always used a staging concept wherein
the SMP/E target sysres and USS file(s) for each running z/OS release are *
never* IPLed.   This relieves a lot of issues and provides a base, if you
will, for all maintenance and cloning.    What are the perceived disadvantages
of using this type of staged clone process?   I've never had any issues
with it but I may just have been lucky.  TIA

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 10:18 AM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 08:44:08 -0600, Arthur Gutowski aguto...@ford.com
 wrote:

 On Wed, 21 Jan 2009 08:18:44 -0600, Mark Zelden
 mark.zel...@zurichna.com wrote:
 [...]
 There is no requirement for the directory
 file system, but I've seen a shop's sysplex root grow because all of
 the directories / mount points were getting created within the sysplex
 root.  This eventually led to a sysplex wide IPL in order to re-create
 a larger sysplex root.   Oh... make sure you are using a single
 BPXPRMxx member if not already and pay attention to AUTOMOVE /
 UNMOUNT in your MOUNT definitions.
 [...]
 
 Wow, that must have been some growth.  Either that or the sysplex root was
 defined with an exceedingly small primary extent and exceedingly small or
 no
 secondary extents.  It's been a while since I looked, but isn't the
 architected
 limit something like 123 extents or 59 volumes, whichever you hit first?

 I can't go back and look since it was a former client of mine, but a few
 points:

 1) It was probably created with the sample SYS1.SAMPLIB(BPXISYSR),
 which has a space definition of 2 cyl and no secondary.  Even if they
 added a secondary of 2, 250 cyls isn't much.

 2) While you can add a secondary extent with CONFIGHFS, adding volumes
 requires an unmount / remount of the HFS.

 3) A 3390-3 isn't all that big anyway.

 
 As for AUTOMOVE / UNMOUNT, not only do you have to pay attention in terms
 of inheritance (filesystems mounted off system root and the system
 root
 should be set the same), but it may also have implications for your
 clone/migration process.  I've found the need to add an unmount step to
 our
 clone JCL because we chose AUTOMOVE, and we have only once performed a
 sysplex-wide shutdown and IPL.
 

 Good point.  That isn't part of our JCL, but it is part of our procedure to
 check if the target sysres set being reused still has its unix files
 mounted
 (the most likely answer is YES).   If it is, there is a rexx exec to
 unmount
 them.   I suppose it makes sense to just add that as a step to the clone
 JCL for those sysplexes.

 I can't do that in one of my other environments because we share the
 sysres set across sysplexes and the cloning is done from a development
 sysplex that has the maintenance unix files mounted.  We have to manually
 go check in the 3 sysplexes (one sandbox) that has the shared file systems
 and unmount the target set's files.  The sharing of resources between 2
 of those 3 sysplexes (prod/devl) goes back long before the existence of
 PDSE and HFS (and the XCF / sysplex requirement for sharing) and they use
 MII for integrity. Thus the kludge.

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: 
 http://home.flash.net/~mzelden/mvsutil.html

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 10:31:05 -0500, Guy Gardoit ggard...@gmail.com wrote:

I'm having trouble understanding why an IPL'ed system's sysres and USS files
are being used for cloning.  I've always used a staging concept wherein
the SMP/E target sysres and USS file(s) for each running z/OS release are *
never* IPLed.   This relieves a lot of issues and provides a base, if you
will, for all maintenance and cloning.What is perceived as disadvantages
to using this type of staged clone process?   I've never had any issues
with it but I may just have been lucky.  TIA

Because they stay mounted across IPLs in a shared file system - even 
though they may not be in use.

For example,  you are IPLed on set A, then you roll out set B. Half the
plex is using the A set of sysres files, half the B set.  Now you roll out
the C set.  The rest of the plex is now on B or C, but none are using
the A set.You need to clone a new set and A is available.  Unless
you had done a sysplex wide IPL, set A's files are still mounted in the
sysplex.

HTH,

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
Actually, they don't.  They are only mounted (to a '/service' directory in
the sandbox system) when service is applied, and unmounted when service
application is complete.  The way I do staging is to NEVER use the staging
resvol(s) nor $VERSION FS(s) files for an IPL'ed system.   They are only
used for cloning and nothing else.   There are no issues with quiescing any
staging FS since it is not mounted anywhere during the cloning process (one
simple job); there is a base resvol(s) that is NEVER IPLed.

To make things clearer, bear with me while I give an example.  A sysplex has
two possible volumes to IPL from, say SYSRS1 or SYSRS2,  which implies two
possible VERSION ROOT filesystem(s), say OMVS.**.SYSRS1 or
OMVS.**.SYSRS2.   These volumes and $VERSION ROOT file systems (the ones NOT
IPLed from - which implies cloning cannot be done unless all systems in the
plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
desirable situation) are recreated from the staging resvol, say STGRES, and
staging $VERSION ROOT FS, say OMVE.**.STGRES.

The clone process uses the staging items as the source and the unused
sysres/$VERSION ROOT FS(s) as the destination.

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 10:58 AM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 10:31:05 -0500, Guy Gardoit ggard...@gmail.com
 wrote:

 I'm having trouble understanding why an IPL'ed system's sysres and USS
 files
 are being used for cloning.  I've always used a staging concept wherein
 the SMP/E target sysres and USS file(s) for each running z/OS release are
 *
 never* IPLed.   This relieves a lot of issues and provides a base, if
 you
 will, for all maintenance and cloning.What is perceived as
 disadvantages
 to using this type of staged clone process?   I've never had any issues
 with it but I may just have been lucky.  TIA

 Because they stay mounted across IPLs in a shared file system - even
 though they may not be in use.

 For example,  you are IPLed on set A, then you roll out set B. Half the
 plex is using the A set of sysres files, half the B set.  Now you
 roll out
 the C set.  The rest of the plex is now on B or C, but none are using
 the A set.You need to clone a new set and A is available.  Unless
 you had done a sysplex wide IPL, set A's files are still mounted in the
 sysplex.

 HTH,

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: 
 http://home.flash.net/~mzelden/mvsutil.html

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 11:15:43 -0500, Guy Gardoit ggard...@gmail.com wrote:

Actually, they don't.  They are only mounted (to a /service' directory in
the sandbox system) when service is applied; unmounted when service
application is complete.  The way I do staging is to NEVER used the staging
resvol(s) nor $VERSION FS(s) files for an IPL'ed system.   They are only
used for cloning and nothing else.   There are no issues with quiescing any
staging FS since it is not mounted anywhere during the cloning process (one
simple job), there is a base resvol(s) that is NEVER IPled.

To make things clearer, bear with me while I give an example.  A sysplex has
two possible volumes to IPL from, say SYSRS1 or SYSRS2,  which implies two
possible VERSION ROOT filesystem(s), say OMVS.**.SYSRS1 or
OMVS.**.SYSRS2.   These volumes and $VERSION ROOT file systems (the ones NOT
IPLed from - which implies cloning cannot be done unless all systems in the
plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
desirable situation) are recreated from the staging resvol, say STGRES, and
staging $VERSION ROOT FS, say OMVE.**.STGRES.

The clone process uses the staging items as the source and the unused
sysres/$VERSION ROOT FS(s) as the destination.

Guy Gardoit
z/OS Systems Programming


I'm not sure I understand what you are saying. 

First, you say you won't roll out a new sysres set until all systems are
migrated to the same (most current) set. Correct?  That's true most 
of the time for us, but not always (oh, for a perfect world - that is why
we need at least 3 target sets).  Regardless, it has nothing to do 
with the problem at hand.  It can happen with just 2 sets.

Second, you say the clone process uses the unused sysres version root
as the target.That is exactly what we do.   The problem is, if you
do rolling IPLs, regardless of all systems being IPLed on the same version
root / files, even the unused version files (the destination, to use your 
terminology) will remain mounted unless you take overt action to 
unmount them.  

Do you always do sysplex wide IPLs?  What am I missing?

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
If all systems in a plex are IPLed from the new sysres-root combo, the old
FS are not mounted anywhere and can be deleted by the clone job. And,
yes, it does take a perfect world to get each sysplex to be on *one* set
of resvol(s) and one $VERSION root FS but I guess I have lived in a perfect
world because that was not a problem in our 4-parallel sysplex complex.

We don't always do sysplex-wide IPLs but roll out the new sysres-root combo
to all members of our plexes as soon as doable. Once all systems in a plex
are on the new set, the old set is not mounted anywhere.  I don't
understand why yours would be.  The next clone process does not take place
until that task is accomplished.   If an emergency fix is required, each
IPLable sysres has an associated SMP/E target zone to make that possible,
though of course it is not desirable.

Hope this clears things up a bit.

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 12:08 PM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 11:15:43 -0500, Guy Gardoit ggard...@gmail.com
 wrote:

 Actually, they don't.  They are only mounted (to a /service' directory in
 the sandbox system) when service is applied; unmounted when service
 application is complete.  The way I do staging is to NEVER used the
 staging
 resvol(s) nor $VERSION FS(s) files for an IPL'ed system.   They are only
 used for cloning and nothing else.   There are no issues with quiescing
 any
 staging FS since it is not mounted anywhere during the cloning process
 (one
 simple job), there is a base resvol(s) that is NEVER IPled.
 
 To make things clearer, bear with me while I give an example.  A sysplex
 has
 two possible volumes to IPL from, say SYSRS1 or SYSRS2,  which implies two
 possible VERSION ROOT filesystem(s), say OMVS.**.SYSRS1 or
 OMVS.**.SYSRS2.   These volumes and $VERSION ROOT file systems (the ones
 NOT
 IPLed from - which implies cloning cannot be done unless all systems in
 the
 plex have been migrated to ONE sysres/$VERSION; IMO a reasonable and
 desirable situation) are recreated from the staging resvol, say STGRES,
 and
 staging $VERSION ROOT FS, say OMVE.**.STGRES.
 
 The clone process uses the staging items as the source and the unused
 sysres/$VERSION ROOT FS(s) as the destination.
 
 Guy Gardoit
 z/OS Systems Programming
 

 I'm not sure I understand what you are saying.

 First, you say you won't roll out a new sysres set until all systems are
 migrated to the same (most current) set. Correct?  That's true most
 of the time for us, but not always (oh, for a perfect world - that is why
 we need at least 3 target sets).  Regardless, it has nothing to do
 with the problem at hand.  It can happen with just 2 sets.

 Second, you say the clone process uses the unused sysres version root
 as the target.That is exactly what we do.   The problem is, if you
 do rolling IPLs, regardless of all systems being IPLed on the same version
 root / files, even the unused version files (destination to use your
 terminology) will remain mounted unless you take overt action to
 unmount them.

 Do you always do sysplex wide IPLs?  What am I missing?

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: 
 http://home.flash.net/~mzelden/mvsutil.html

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 12:52:49 -0500, Guy Gardoit ggard...@gmail.com wrote:


We don't always do sysplex-wide IPLs but roll out the new sysres-root combo
to all members of our plexes as soon as doable. Once all systems in a plex
are on the new set, the old set is not mounted anywhere.  I don't
understand why yours would be.  

And I don't understand how yours aren't.  :-)   Don't you use AUTOMOVE
for your version files?  Without overt action to unmount them, they would
move to another system forever if you always do rolling IPLs.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Guy Gardoit
Eh!  I do apologize.   The first step in the clone job performs a Rexx exec
that ensures no system in the plex is running from the target resvol and
also dismounts the associated FS.   It's so routine that I forgot what the job
does!

Guy Gardoit
z/OS Systems Programming

On Wed, Jan 28, 2009 at 1:13 PM, Mark Zelden mark.zel...@zurichna.comwrote:

 On Wed, 28 Jan 2009 12:52:49 -0500, Guy Gardoit ggard...@gmail.com
 wrote:


 We don't always do sysplex-wide IPLs but roll out the new sysres-root
 combo
 to all members of our plexes as soon as doable. Once all systems in a
 plex
 are on the new set, the old set is not mounted anywhere.  I don't
 understand why yours would be.

 And I don't understand how yours aren't.  :-)   Don't you use AUTOMOVE
 for your version files?  Without overt action to unmount them, they would
 move to another system forever if you always do rolling IPLs.

 Mark
 --
 Mark Zelden
 Sr. Software and Systems Architect - z/OS Team Lead
 Zurich North America / Farmers Insurance Group - ZFUS G-ITO
 mailto:mark.zel...@zurichna.com
 z/OS Systems Programming expert at
 http://expertanswercenter.techtarget.com/
 Mark's MVS Utilities: 
 http://home.flash.net/~mzelden/mvsutil.html

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-28 Thread Mark Zelden
On Wed, 28 Jan 2009 13:40:14 -0500, Guy Gardoit ggard...@gmail.com wrote:

Eh!  I do apologize.   The first step in the clone job performs a Rexx exec
that ensures no system in the plex is running from the target resvol and
also dis-mounts the associated FS.   So routine that I forgot what the job
does!

Guy Gardoit
z/OS Systems Programming

Ahhh... glad you found it.  I hate mysteries in data processing.  So you are
basically doing the same thing as Art is.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Cwi Jeret
We implemented shared HFS in our sysplex with 4 members
a few months ago.

Now we want to upgrade the system of 1 of the 4 members of
the sysplex with an updated ROOT, as part of maintenance service for that
system, but leave the 3 other members in their old state.

As recommended in UNIX System Services Planning, Chapter
7.6.2 "Adding a system-specific or version root file system to your shared file
system configuration",
we added the new ROOT as a new version root in the shared file system configuration.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted with their mount points on
the old root cannot also be mounted at their mount points on the
new root for the upgraded system.
In this situation we cannot run the applications that use these file systems
on the upgraded member of the sysplex.

Does anyone know a solution for this problem?

Cwi Jeret 
Bank-Hapoalim T.A
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Veilleux, Jon L
We use symbolic links to point to any of the files that are not
distributed as part of the root. This allows us the flexibility to share
files between system roots. When we first build the system root, we add
a /product symbolic link to /usr/lpp which points to a separately
mounted directory (under the system root) where we create subdirectories
to mount all of our products.
There are a couple of SHARE presentations on how to accomplish this.
Feel free to contact me off list if you want more information.
Jon


Jon L. Veilleux 
veilleu...@aetna.com 
(860) 636-2683 


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Cwi Jeret
Sent: Wednesday, January 21, 2009 5:12 AM
To: IBM-MAIN@bama.ua.edu
Subject: USS file sharing in z/OS - Version Upgrade

We implemented shared HFS in our sysplex with 4 members a few
months ago.

Now we want to upgrade the system of 1 of the 4 members of the sysplex
with an updated ROOT, as part of maintenance service for that system,
but leave the 3 other members in their old state.

As recommended in UNIX System Services Planning, Chapter
7.6.2 "Adding a system-specific or version root file system to your
shared file system configuration",
we added the new ROOT as a new version root in the shared file system
configuration.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted with their mount points
on the old root cannot also be mounted at their mount points
on the new root for the upgraded system.
In this situation we cannot run the applications that use these file
systems on the upgraded member of the sysplex.

Does anyone know a solution for this problem?

Cwi Jeret
Bank-Hapoalim T.A
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
This e-mail may contain confidential or privileged information. If
you think you have received this e-mail in error, please advise the
sender by reply e-mail and then delete this e-mail immediately.
Thank you. Aetna   

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS - Version Upgrade

2009-01-21 Thread Mark Zelden
On Wed, 21 Jan 2009 04:12:01 -0600, Cwi Jeret cwi_je...@yahoo.com wrote:

We implemented shared HFS in our sysplex with 4 members
a few months ago.

Now we want to upgrade the system of 1 of the 4 members of
the sysplex with an updated ROOT, as part of maintenance service for that
system, but leave the 3 other members in their old state.

As recommended in UNIX System Services Planning, Chapter
7.6.2 "Adding a system-specific or version root file system to your shared file
system configuration",
we added the new ROOT as a new version root in the shared file system configuration.
This new root will be the default root for the upgraded system after IPL.

The problem is that file systems currently mounted with their mount points on
the old root cannot also be mounted at their mount points on the
new root for the upgraded system.
In this situation we cannot run the applications that use these file systems
on the upgraded member of the sysplex.

Does anyone know a solution for this problem?


Yes.  Nothing should be mounted off of your version root that is not part
of the OS.  There are 2 scenarios.

For things you share, you mount a directory file system off of your
sysplex root (for shared file systems) and then mount other file 
systems off of that one.  There is no requirement for the directory 
file system, but I've seen a shop's sysplex root grow because all of
the directories / mount points were getting created within the sysplex
root.  This eventually led to a sysplex wide IPL in order to re-create
a larger sysplex root.   Oh... make sure you are using a single
BPXPRMxx member if not already and pay attention to AUTOMOVE / 
UNMOUNT in your MOUNT definitions.  

In the case of something that is in the root that you don't share, 
you do the same sort of thing with symbolic links that is documented
for setting up CRON with a read only root (see Unix System Services
Planning).  This needs to be done each time you do an OS
upgrade (i.e., each time a new root is distributed with ServerPac).

For example, let's say each system has some unique file system
that needs to be mounted at /usr/local (which is in the OS / version root). 
 
1) Create /etc/local  (/etc is chosen because it is already a system
specific file system)

2) Create a symbolic link for /usr/local that points to /etc/local.
So, if your maintenance OS / version root is mounted
at /service:

rm -fr /service/usr/local   
ln -s /etc/local /service/usr/local 

3) You need to IPL with the new root that has the symbolic link.
Mount your file system at MOUNTPOINT('/SYSNAME./etc/local')
and use the UNMOUNT parm.
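
In BPXPRMxx terms, step 3 might look something like the following sketch,
where the data set name, the TYPE, and the use of the &SYSNAME. symbol are
assumptions for illustration; only the mount point and the UNMOUNT parm come
from the steps above:

   MOUNT FILESYSTEM('OMVS.&SYSNAME..LOCAL')
         MOUNTPOINT('/&SYSNAME./etc/local')
         TYPE(HFS) MODE(RDWR)
         UNMOUNT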

Hope this helps.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS

2008-07-01 Thread Arthur Gutowski
Andrew,

We have a similar mixed environment.  We approach it probably as Mark does.  
SYSPLEX(YES) on all systems that use the shared SYSPLEX ROOT on the small 
subset of shared DASD (with CDS' and a PARMLIB).  We had one pre-existing 
sysplex that fully shares its DASD.  The rest of the previously monoplex 
systems do not share with this sysplex or with each other (yet).  We have 
one more sysplex merging in on 7/20.  It works, and it's not complicated.

SYSPLEX(YES) only requires a common SYSPLEX ROOT within the Sysplex that 
all systems can get to.  Multiple HFSPlexes are not allowed within a Sysplex, 
but there are no rules (that I have found) that state you cannot limit the 
degree of filesystem sharing you exploit, other than the ROOT.
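
For anyone setting this up for the first time, the sysplex-sharing pieces of
a common BPXPRMxx come down to just a few statements.  The following is only
a hedged sketch with made-up data set names; your naming, file system TYPE,
and AUTOMOVE/UNMOUNT choices will differ:

   SYSPLEX(YES)                          /* join the shared FS configuration */
   VERSION('&SYSR1.')                    /* version root name from a symbol  */
   ROOT  FILESYSTEM('OMVS.SYSPLEX.ROOT') TYPE(HFS) MODE(RDWR)
   MOUNT FILESYSTEM('OMVS.&SYSR1..VERSION.ROOT')
         MOUNTPOINT('/$VERSION') TYPE(HFS) MODE(READ)
   MOUNT FILESYSTEM('OMVS.&SYSNAME..SYSTEM')
         MOUNTPOINT('/&SYSNAME.') TYPE(HFS) MODE(RDWR) UNMOUNT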

We did get caught by mountpoints and symbolic links for a couple of 
applications that were different on our monoplex systems.  Once we 
reconciled the directory structure and BPXPRM mountpoints, we were fine.  
You may have to reconcile your automount policies, too.

The Merging Systems into a Sysplex redbook, SG24-6818 is another fine 
reference for HFS and other components.

HTH,
Art Gutowski
Ford Motor Company ITInfrastructure

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-07-01 Thread Mark Zelden
On Tue, 1 Jul 2008 09:03:57 -0500, Arthur Gutowski [EMAIL PROTECTED] wrote:


SYSPLEX(YES) only requires a common SYSPLEX ROOT within the Sysplex that
all systems can get to. 

To clarify.. all systems that will run with SYSPLEX(YES).  In the OP's case,
it is a subset of the sysplex.  Other systems obviously can't share BPXPRMxx
and must run with SYSPLEX(NO). 

 Multiple HFSPlexes is not allowed within a Sysplex,

Correct.  Not all systems have to participate in the sharing environment,
but you can't have more than one per sysplex.

but there are no rules (that I have found) that state you cannot limit the
degree of filesystem sharing you exploit, other than the ROOT.

All the sysres file systems will be shared in the sysplex by all systems
sharing the same sysres set (read only) **.   For the systems participating
in the shared file system configuration, all HFS/zFS files will be shared,
whether you like it or not.

** not entirely true... if one of your non-sharing systems
doesn't use TSM, for example, you wouldn't have to mount its HFS/zFS.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-07-01 Thread Rob Schramm
Personally, I have always viewed the single HFSPlex as a deficiency. There 
are so few items in z/OS that only allow one and present themselves as a 
single point of failure.  I would like it much better if it allowed for 
multiple within a plex... similar to running multiple JES2's where you 
supply a plex name to allow them to co-exist.  That way I could have one 
for tech systems, one for development and one for production.  In the face 
of something unexpectedly bad happening.. then the scope of impact is 
contained. 

Just my 2 cents.

Rob Schramm
Sirius Computer Solutions

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Andrew Metcalfe
I have a slightly different requirement, and I have come to the conclusion that 
I can’t meet it.

I have a user that wants to share zFSes between a subset of systems within 
a 13-way parallel sysplex. 
Not all dasd is shared between all members of the sysplex. We merged 3 
sysplexes into one and only share the dasd necessary to run the plex e.g. 
CDS, logger, MII etc. We are in the process of converting to a full sysplex 
where all resources will be available, but this will take some time (sometime 
never!).

As not all systems share the dasd, I cannot move to SYSPLEX(YES) type 
file sharing. I realise that I could share the files if they were mounted R/O, 
but I know the application wants R/W access. 
Does anyone have any clever suggestions?

Thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Mark Zelden
On Mon, 30 Jun 2008 10:42:02 -0500, Andrew Metcalfe
[EMAIL PROTECTED] wrote:

I have a slightly different requirement, and I have come to the conclusion that
I can’t meet it.

I have a user that wants to share zFSes between a subset of systems within
a 13-way parallel sysplex.
Not all dasd is shared between all members of the sysplex. We merged 3
sysplexes into one and only share the dasd necessary to run the plex e.g.
CDS, logger, MII etc. We are in the process of converting to a full sysplex
where all resources will be available, but this will take some time (sometime
never!).

As all systems do not share the dasd, I cannot move to SYSPLEX(YES) type
file sharing. I realise that I could share the files if they were mounted
R/O, but
I know the application wants to R/W.
Does anyone have any clever suggestions?


Not all systems have to participate in the shared filesystem environment. 
It can be a subset of your sysplex.  

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread McKown, John
 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:[EMAIL PROTECTED] On Behalf Of Andrew Metcalfe
 Sent: Monday, June 30, 2008 10:42 AM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: USS file sharing in z/OS
 
 I have a slightly different requirement, and I have come to 
 the conclusion that 
 I can't meet it.
 
 I have a user that wants to share zFSes between a subset of 
 systems within 
 a 13-way parallel sysplex. 
 Not all dasd is shared between all members of the sysplex. We 
 merged 3 
 sysplexes into one and only share the dasd necessary to run 
 the plex e.g. 
 CDS, logger, MII etc. We are in the process of converting to 
 a full sysplex 
 where all resources will be available, but this will take 
 some time (sometime 
 never!).
 
 As all systems do not share the dasd, I cannot move to 
 SYSPLEX(YES) type 
 file sharing. I realise that I could share the files if they 
 were mounted R/O, but 
 I know the application wants to R/W. 
 Does anyone have any clever suggestions?
 
 Thanks

NFS. A bit complicated, but workable. If you have hipersockets between
the z/OS images, it should be quite fast.
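
As a rough illustration only (the host name, path, and mount parameters below
are invented, and the exact PARM syntax belongs to the z/OS NFS client guide),
the client side can be little more than one mount per LPAR that does not own
the DASD:

   MOUNT FILESYSTEM('NFSC001')
         TYPE(NFS) MODE(RDWR)
         MOUNTPOINT('/shared/appdata')
         PARM('owninghost.example.com:/hfs/shared/appdata')

The z/OS NFS server on the owning image has to export that path, and the NFS
client colony address space has to be defined in BPXPRMxx, which is where the
"a bit complicated" part comes in.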

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology

The information contained in this e-mail message may be privileged
and/or confidential.  It is for intended addressee(s) only.  If you are
not the intended recipient, you are hereby notified that any disclosure,
reproduction, distribution or other use of this communication is
strictly prohibited and could, in certain circumstances, be a criminal
offense.  If you have received this e-mail in error, please notify the
sender by reply and delete this message without copying or disclosing
it.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Andrew Metcalfe
... but I guess that all systems in the subset have to be converted to have 
the new file systems, e.g. sysplex root and version, and the sysplex root has 
to be AUTOMOVE? Whilst this may work technically, I feel that I might be 
buying into a whole stack of trouble!

This will be difficult to implement given our standard build where we install 
once and ship many standard z/OS systems based on it. As none of us are 
getting any younger, I am trying to reduce complexity rather than introduce 
some bespoke processing that will trip up our successors!

Thanks

Andrew

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Mark Zelden
On Mon, 30 Jun 2008 12:15:00 -0500, Andrew Metcalfe
[EMAIL PROTECTED] wrote:

.. but I guess that all systems in the subset have to be converted to have
the new file systems e.g. sysplex root and version? 

Not true.   


and the sysplex root has
to be AUTOMOVE. 

Yes, for the participating systems.

Whilst this may work technically, I feel that I might be
buying into a whole stack of trouble!


Not if you implement it properly. 

This will be difficult to implement given our standard build where we install
once and ship many standard z/OS systems based on it. As none of us are
getting any younger, I am trying to reduce complexity rather than introduce
some bespoke processing that will trip up our succesors!


I do the same thing.   I even share the same physical sysres set between
sysplexes that have shared file systems and ones that don't (including 
some monplexes).   Although I don't have any sysplexes that only do 
partial sharing... but I may be doing that in the near future - at least
temporarily in order to satisfy a sharing requirement.

Have you read the doc?
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/BPXZB271/8.0?SHELF=E0Z2IN71&DT=20070122154602


Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Steve Comstock

Andrew Metcalfe wrote:

[snip]

As none of us are getting any younger, 
I am trying to reduce complexity rather than introduce 
some bespoke processing that will trip up our succesors!




And, of course, your company is preparing for the departure
of the older heads by training the younger heads, to guarantee
some continuity of maintaining and enhancing the applications
that keep the company running successfully.

Right?



Thanks

Andrew



Kind regards,

-Steve Comstock
The Trainer's Friend, Inc.

303-393-8716
http://www.trainersfriend.com

  z/OS Application development made easier
* Our classes include
   + How things work
   + Programming examples with realistic applications
   + Starter / skeleton code
   + Complete working programs
   + Useful utilities and subroutines
   + Tips and techniques

== Check out the Trainer's Friend Store to purchase z/OS  ==
== application developer toolkits. Sample code in four==
== programming languages, JCL to Assemble or compile, ==
== bind and test. ==
==   http://www.trainersfriend.com/TTFStore/index.html==

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-06-30 Thread Rick Fochtman

snip---
And, of course, your company is preparing for the departure of the older 
heads by training the younger heads, to guarantee some continuity of 
maintaining and enhancing the applications that keep the company running 
successfully.

---unsnip
Right. Did you notice that pig fly by outside your window? Or the pink 
elephant that landed on your roof last night?


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-28 Thread McKown, John
 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:[EMAIL PROTECTED] On Behalf Of Mark Zelden
 Sent: Tuesday, May 27, 2008 3:31 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: USS file sharing in z/OS
[snip]
 
 Except for read only requests (even when mounted R/W).  IIRC, 
 those are
 handled from the local system regardless of the file system owner.
 
 Mark
 --
 Mark Zelden

Mark,

Hum, that seems weird to me. Unless the owning system always hardens
the data to disk immediately. Which it may well do. I am more used to
normal UNIX systems which only harden data to disk periodically
(fsync). Most UNIX systems regard I/O as a necessary evil and buffer the
bleep out of it for performance reasons. However, z/OS UNIX may not do
this. Thanks to OCO, I will likely never know this as it is not
documented.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology

The information contained in this e-mail message may be privileged
and/or confidential.  It is for intended addressee(s) only.  If you are
not the intended recipient, you are hereby notified that any disclosure,
reproduction, distribution or other use of this communication is
strictly prohibited and could, in certain circumstances, be a criminal
offense.  If you have received this e-mail in error, please notify the
sender by reply and delete this message without copying or disclosing
it.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-28 Thread Richard Bond
Mike,

As I understand it (doc is sketchy on this) from prior experience, if a PFS 
like HFS or zFS is monted R/W on one or more of the systems in the sysplex, 
then all requests, read and write, from non-owning systems are function shipped 
to the owner.  I.E., they auto-magically become sysplex-unaware.  If 
mounted R/O on all systems sharing the FS in the plex, then the FS is 
sysplex-aware and read access is done locally (if the system has access to 
the DASD volume that the FS resides on).   All this is done for you based on 
values stored in the OMVS CDS at mount time. 
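
A quick way to see how a given file system is behaving on a running system
(the path here is just a placeholder):

   df -v /u/myapp     # shows, among other things, the file system owner
                      # and whether this system is a client of that owner

If the local system shows up as a client of another owner, its requests are
being function shipped; if not, access is local.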

Distributed FS, like NFS, can be sysplex-aware even if mounted R/W on all 
systems in the plex.

Of course, mount attributes (AUTOMOVE/NOAUTOMOVE) adds some fun to all of this 
but letting the attribute default will usually work.

So, to answer your question, yes, except for
"when the application in the read-only LPAR attempted to perform a write?"

If the FS is mounted R/O to a particular system, then that system can only read 
it.  Solution would be to mount R/W to all LPARs that need it.   The 
sysplex-root is an example of such a FS.

HTH

RB

 Mike Myers [EMAIL PROTECTED] 5/27/2008 3:36 PM 

Richard:

What you are saying is interesting. I am in the middle of reading about the process 
of setting up HFS file sharing. Are you saying that I can use XCF signaling to 
function ship requests from an LPAR with read-only access to an HFS file to 
another LPAR that has read/write access to the same HFS files? Would this XCF 
request be automatically made when the application in the read-only LPAR 
attempted to perform a write? Sounds even easier, if I understand you correctly.

Mike

 Richard Bond [EMAIL PROTECTED] 5/27/2008 3:06 PM 
Access thru XCF depends on whether the file-system is sysplex-aware.  If so, 
(like a R/O HFS) then access is local unless the requesting system does not 
have DASD access. In that case, XCF to owning system is required.

The  R/O VERSION  ROOT FS should have local access to all sharing systems - 
the owner doesn't matter.

 Edward Jaffe [EMAIL PROTECTED] 5/27/2008 2:42 PM 

Mark Zelden wrote:
 A properly configured shared file system in either a basic or parallel 
 sysplex.

 Have a look at the chapter on sharing file systems in a sysplex in the
 UNIX System Services Planning manual.   I think the doc is pretty good. 
 If you have questions after that, let us know.
   

Exactly. There is no need whatsoever for parallel sysplex. Both zFS and 
HFS use USS file sharing support, which uses only XCF signaling and a 
couple data set ... no XES. Requests are function shipped to the file 
system owning system, so you get better performance on the image where 
the file system is mounted.

-- 
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED] 
http://www.phoenixsoftware.com/ 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html 
==
CONFIDENTIALITY NOTICE: This email contains information from the sender that 
may be CONFIDENTIAL, LEGALLY PRIVILEGED, PROPRIETARY or otherwise protected 
from disclosure. This email is intended for use only by the person or entity to 
whom it is addressed. If you are not the intended recipient, any use, 
disclosure, copying, distribution, printing, or any action taken in reliance on 
the contents of this email, is strictly prohibited. If you received this email 
in error, please contact the sending party by reply email, delete the email 
from your computer system and shred any paper copies.

Note to Patients: There are a number of risks you should consider before using 
e-mail to communicate with us. See our Privacy Policy and Henry Ford My Health 
at www.henryford.com for more detailed information. If you do not believe that 
our policy gives you the privacy and security protection you need, do not send 
e-mail or Internet communications to us.

==

--
The contents of this e-mail (and any attachments) are confidential, may be 
privileged and may contain copyright material. You may only reproduce or 
distribute material if you are expressly authorized by us to do so. If you are 
not the intended recipient, any use, disclosure or copying of this email (and 
any attachments) is unauthorized. If you have received this e-mail in error, 
please notify the sender and immediately delete this e-mail and any copies of 
it from your system.
==


USS file sharing in z/OS

2008-05-27 Thread Mike Myers
Hi:
 
I seem to recall seeing some restrictions on the ability to share USS files 
between LPARs in a basic sysplex, although I don't remember where I saw the 
reference. 
 
My question is this. We have a basic sysplex here and our primary application 
uses the DB2 Text Extender, which employs the z/OS Text Search facility. It 
looks as if all of Text Search is embodied in USS HFS files. Looking at the USS 
parameters for these files, I see they are mounted RDWR. We are upgrading from 
z/OS v1.r4 to z/OS v1.r7 and I would like to test the application that uses the 
functions on v1.r7 before putting v1.r7 into production. What is required to 
protect USS files (if shared) against concurrent update corruption? And, can a 
basic sysplex (as opposed to a parallel sysplex) manage that?   
 
For additional background, we have two separate applications, one production 
and one test. Normally, they both run on the same LPAR. What I want to do is 
run the production version of the product on the production LPAR and the test 
version of the product on the test LPAR at the same time. There is a day end 
process where data from the production system is updated to the test system 
through a series of batch jobs. If there is a potential issue with concurrent 
update attempts between LPARs damaging the USS files, I want to find another 
way of testing the product. 
 
Mike Myers
Pitt County Memorial Hospital
Greenville, NC

--
The contents of this e-mail (and any attachments) are confidential, may be 
privileged and may contain copyright material. You may only reproduce or 
distribute material if you are expressly authorized by us to do so. If you are 
not the intended recipient, any use, disclosure or copying of this email (and 
any attachments) is unauthorized. If you have received this e-mail in error, 
please notify the sender and immediately delete this e-mail and any copies of 
it from your system.
==

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mark Zelden
On Tue, 27 May 2008 13:59:46 -0400, Mike Myers [EMAIL PROTECTED] wrote:

Hi:
 
I seem to recall seeing some restrictions on the ability to share USS files
between LPARs in a basic sysplex, although I don't remember where I saw the
reference. 
 

Perhaps in a recent thread where there seemed to be some confusion.

snip

What is required to protect USS files (if shared) against concurrent update
corruption? And, can a basic sysplex (as opposed to a parallel sysplex)
manage that?   
 

A properly configured shared file system in either a basic or parallel sysplex.

Have a look at the chapter on sharing file systems in a sysplex in the
UNIX System Services Planning manual.   I think the doc is pretty good. 
If you have questions after that, let us know.

Regards,

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mike Myers
Mark:
 
Thanks. I'll take a look. 
 
Mike


 Mark Zelden [EMAIL PROTECTED] 5/27/2008 2:30 PM 
On Tue, 27 May 2008 13:59:46 -0400, Mike Myers [EMAIL PROTECTED] wrote:

Hi:
 
I seem to recall seeing some restrictions on the ability to share USS files
between LPARs in a basic sysplex, although I don't remember where I saw the
reference. 
 

Perhaps in a recent thread where there seemed to be some confusion.

snip

What is required to protect USS files (if shared) against concurrent update
corruption? And, can a basic sysplex (as opposed to a parallel sysplex)
manage that?   
 

A properly configured shared file system in either a basic or parallel sysplex.

Have a look at the chapter on sharing file systems in a sysplex in the
UNIX System Services Planning manual.   I think the doc is pretty good. 
If you have questions after that, let us know.

Regards,

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED] 
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ 
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html 



--
The contents of this e-mail (and any attachments) are confidential, may be 
privileged and may contain copyright material. You may only reproduce or 
distribute material if you are expressly authorized by us to do so. If you are 
not the intended recipient, any use, disclosure or copying of this email (and 
any attachments) is unauthorized. If you have received this e-mail in error, 
please notify the sender and immediately delete this e-mail and any copies of 
it from your system.
==

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread McKown, John
 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:[EMAIL PROTECTED] On Behalf Of Mike Myers
 Sent: Tuesday, May 27, 2008 1:00 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: USS file sharing in z/OS
 
 Hi:
  
 I seem to recall seeing some restrictions on the ability to 
 share USS files between LPARs in a basic sysplex, although I 
 don't remember where I saw the reference. 
  
 My question is this. We have a basic sysplex here and our 
 primary application uses the DB2 Text Extender, which employs 
 the z/OS Text Search facility. It looks as if all of Text 
 Search is embodied in USS HFS files. Looking at the USS 
 parameters for these files, I see they are mounted RDWR. We 
 are upgrading from z/OS v1.r4 to z/OS v1.r7 and I would like 
 to test the application that uses the functions on v1.r7 
 before putting v1.r7 into production. What is required to 
 protect USS files (if shared) against concurrent update 
 corruption? And, can a basic sysplex (as opposed to a 
 parallel sysplex) manage that?   
  
 For additional background, we have two separate applications, 
 one production and one test. Normally, they both run on the 
 same LPAR. What I want to do is run the production version of 
 the product on the production LPAR and the test version of 
 the product on the test LPAR at the same time. There is a day 
 end process where data from the production system is updated 
 to the test system through a series of batch jobs. If there 
 is a potential issue with concurrent update attempts between 
 LPARs damaging the USS files, I want to find another way of 
 testing the product. 
  
 Mike Myers

Mike,

I've been looking at this recently. I'm still trying to get my head
around all the parts. Granted, it's probably not too difficult once
done and understood. As a possible alternative, if you are only going to
do a small amount of sharing, without too much activity, then NFS over
Hipersockets might be faster to implement. Then again, I may think so
because I've implemented NFS myself and so it seems not too hard to do.
As my Russian teacher would always say, "Easy! Easy!", but then she was a
Russian! GRIN
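
FWIW, a rough sketch of the NFS-over-HiperSockets idea in BPXPRMxx terms, with
made-up names (NFSCLNT, prodhost, the paths) and a PARM format that is only
indicative - check the z/OS NFS customization book for the exact client mount
syntax:

   FILESYSTYPE TYPE(NFS) ENTRYPOINT(GFSCINIT) ASNAME(NFSCLNT)
   MOUNT FILESYSTEM('NFS.TEXTSRCH')
         TYPE(NFS) MODE(RDWR)
         MOUNTPOINT('/u/textsearch')
         PARM('prodhost:/hfs/u/textsearch')  /* prodhost = HiperSockets    */
                                             /* address of the prod LPAR   */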

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Edward Jaffe

Mark Zelden wrote:

A properly configured shared file system in either a basic or parallel sysplex.

Have a look at the chapter on sharing file systems in a sysplex in the
UNIX System Services Planning manual.   I think the doc is pretty good. 
If you have questions after that, let us know.
  


Exactly. There is no need whatsoever for a parallel sysplex. Both zFS and 
HFS use USS file sharing support, which uses only XCF signaling and a 
couple of data sets ... no XES. Requests are function shipped to the file 
system owning system, so you get better performance on the image where 
the file system is mounted.
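
FWIW, a quick way to see the ownership side of this (the path and system
names are placeholders):

   D OMVS,F                         (console: file systems, owners, AUTOMOVE)
   df -v /u/textsearch              (shell: owner of the FS backing that path)
   chmount -d PRD1 /u/textsearch    (shell: move ownership to system PRD1)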


--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mike Myers
Ed:
 
Sounds good to me. I am presently reading the File Sharing chapter in the USS 
Planning manual. Sounds like I am going to like this one. 
 
Mike


 Edward Jaffe [EMAIL PROTECTED] 5/27/2008 2:42 PM 
Mark Zelden wrote:
 A properly configured shared file system in either a basic or parallel 
 sysplex.

 Have a look at the chapter on sharing file systems in a sysplex in the
 UNIX System Services Planning manual.   I think the doc is pretty good. 
 If you have questions after that, let us know.
   

Exactly. There is no need whatsoever for parallel sysplex. Both zFS and 
HFS use USS file sharing support, which uses only XCF signaling and a 
couple data set ... no XES. Requests are function shipped to the file 
system owning system, so you get better performance on the image where 
the file system is mounted.

-- 
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED] 
http://www.phoenixsoftware.com/ 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html 




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Jousma, David
The simplest form of sharing is to mount them on both systems READ only,
if the app can support that. No muss, no fuss.  Just remember, you can't
make any changes then.
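
A minimal sketch, with a placeholder data set name and mount point:

   MOUNT FILESYSTEM('OMVS.TEXTSRCH.HFS')
         TYPE(HFS) MODE(READ)
         MOUNTPOINT('/u/textsearch')

If it is already mounted R/W somewhere, IIRC the TSO UNMOUNT command with the
REMOUNT(READ) operand will flip the mode without a separate unmount and mount.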


___

Dave Jousma
Assistant Vice President
Mainframe Services
[EMAIL PROTECTED]
616.653.8429


-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Mike Myers
Sent: Tuesday, May 27, 2008 2:47 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: USS file sharing in z/OS

Ed:
 
Sounds good to me. I am presently reading the File Sharing chapter in
the USS Planning manual. Sounds like I am going to like this one. 
 
Mike




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mike Myers
Dave:
 
That sounds simple, of course. I will have to find out from the product vendor 
what USS files, if any, they need to update. If none, then I could use your 
proposed solution. 
 
Thanks
 
Mike

 Jousma, David [EMAIL PROTECTED] 5/27/2008 2:53 PM 
The simplest form of sharing is to mount them to both systems READ only,
if the app can support that. No muss, no fuss.  Just remember, you can't
make any changes then.


___

Dave Jousma
Assistant Vice President
Mainframe Services
[EMAIL PROTECTED] 
616.653.8429


-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Mike Myers
Sent: Tuesday, May 27, 2008 2:47 PM
To: IBM-MAIN@BAMA.UA.EDU 
Subject: Re: USS file sharing in z/OS

Ed:

Sounds good to me. I am presently reading the File Sharing chapter in
the USS Planning manual. Sounds like I am going to like this one. 

Mike




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html 




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Richard Bond
Access through XCF depends on whether the file system is sysplex-aware.  If it 
is (like a R/O HFS), then access is local unless the requesting system does not 
have DASD access; in that case, XCF to the owning system is required.

The R/O VERSION and ROOT file systems should be accessed locally on all sharing 
systems - the owner doesn't matter.

 Edward Jaffe [EMAIL PROTECTED] 5/27/2008 2:42 PM 

Mark Zelden wrote:
 A properly configured shared file system in either a basic or parallel 
 sysplex.

 Have a look at the chapter on sharing file systems in a sysplex in the
 UNIX System Services Planning manual.   I think the doc is pretty good. 
 If you have questions after that, let us know.
   

Exactly. There is no need whatsoever for parallel sysplex. Both zFS and 
HFS use USS file sharing support, which uses only XCF signaling and a 
couple data set ... no XES. Requests are function shipped to the file 
system owning system, so you get better performance on the image where 
the file system is mounted.

-- 
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: USS file sharing in z/OS

2008-05-27 Thread Edward Jaffe

Richard Bond wrote:

Access thru XCF depends on whether the file-system is sysplex-aware.  If so, 
(like a R/O HFS) then access is local unless the requesting system does not have DASD 
access. In that case, XCF to owning system is required.

The  R/O VERSION  ROOT FS should have local access to all sharing systems - 
the owner doesn't matter.
  


Good point! Read-only access does not require function shipping. Only 
read/write access requires that. (I should have said that.  :-[ )


--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mike Myers
Richard:
 
What you are saying is interesting. I am in the middle of reading about the 
process of setting up HFS file sharing. Are you saying that I can use XCF 
signaling to function-ship requests from an LPAR with read-only access to an 
HFS file to another LPAR that has read/write access to the same HFS files? 
Would this XCF request be made automatically when the application in the 
read-only LPAR attempted to perform a write? Sounds even easier, if I 
understand you correctly.
 
Mike

 Richard Bond [EMAIL PROTECTED] 5/27/2008 3:06 PM 
Access thru XCF depends on whether the file-system is sysplex-aware.  If so, 
(like a R/O HFS) then access is local unless the requesting system does not 
have DASD access. In that case, XCF to owning system is required.

The  R/O VERSION  ROOT FS should have local access to all sharing systems - 
the owner doesn't matter.

 Edward Jaffe [EMAIL PROTECTED] 5/27/2008 2:42 PM 

Mark Zelden wrote:
 A properly configured shared file system in either a basic or parallel 
 sysplex.

 Have a look at the chapter on sharing file systems in a sysplex in the
 UNIX System Services Planning manual.   I think the doc is pretty good. 
 If you have questions after that, let us know.
   

Exactly. There is no need whatsoever for parallel sysplex. Both zFS and 
HFS use USS file sharing support, which uses only XCF signaling and a 
couple data set ... no XES. Requests are function shipped to the file 
system owning system, so you get better performance on the image where 
the file system is mounted.

-- 
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED] 
http://www.phoenixsoftware.com/ 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html 


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mark Zelden
On Tue, 27 May 2008 11:42:51 -0700, Edward Jaffe
[EMAIL PROTECTED] wrote:

Mark Zelden wrote:
 A properly configured shared file system in either a basic or parallel
sysplex.

 Have a look at the chapter on sharing file systems in a sysplex in the
 UNIX System Services Planning manual.   I think the doc is pretty good.
 If you have questions after that, let us know.


Exactly. There is no need whatsoever for parallel sysplex. Both zFS and
HFS use USS file sharing support, which uses only XCF signaling and a
couple data set ... no XES. Requests are function shipped to the file
system owning system, so you get better performance on the image where
the file system is mounted.


Except for read only requests (even when mounted R/W).  IIRC, those are
handled from the local system regardless of the file system owner.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Paul Gilmartin
On Tue, 27 May 2008 15:31:09 -0500, Mark Zelden wrote:

Except for read only requests (even when mounted R/W).  IIRC, those are
handled from the local system regardless of the file system owner.

But beware; IIRC, even an apparent read request will attempt to
update the time-of-access in the I-node.
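
For what it's worth, you can usually watch that happen from the shell
(placeholder path):

   ls -lu /u/textsearch/somefile            # last access time before
   cat /u/textsearch/somefile > /dev/null
   ls -lu /u/textsearch/somefile            # access time has usually moved

so even a "read-only" workload against an R/W-mounted file system ends up
doing writes under the covers.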

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Edward Jaffe

Mark Zelden wrote:

Except for read only requests (even when mounted R/W).  IIRC, those are
handled from the local system regardless of the file system owner.
  


Are you sure about that? I could be wrong, but I thought that all 
requests, when mounted R/W, were function shipped. Is there a manual, 
white paper, or similar reference that describes this processing 
authoritatively?


--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mark Zelden
On Tue, 27 May 2008 13:58:19 -0700, Edward Jaffe
[EMAIL PROTECTED] wrote:

Mark Zelden wrote:
 Except for read only requests (even when mounted R/W).  IIRC, those are
 handled from the local system regardless of the file system owner.


Are you sure about that? I could be wrong, but I thought that all
requests, when mounted R/W, were function shipped. Is there a manual,
white paper, or similar reference that describes this processing
authoritatively?


Well, I did say IIRC.  :-)  So I just looked this up in the planning 
manual.  First, a definition of sysplex-aware vs. sysplex-unaware:

If a PFS allows a file system to be locally accessed on all systems 
in a sysplex for a particular mode, then the PFS is sysplex-aware for that
mode. If a PFS requires that a file system be accessed through the 
remote owning system from all other systems in a sysplex for a particular
mode, then the PFS is sysplex-unaware for that mode.



And here is the part I remembered that led to my statement:



 For example, HFS is sysplex-unaware for read-write mode, because all
non-owning systems must access 
read-write file systems through the remote owning system. The non-owning
systems are said to be sysplex clients. However, HFS is 
sysplex-aware for read-only mode, which means that each system can access
read-only file systems locally, and do not need to contact the 
owning system
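
In BPXPRMxx terms that means, for an HFS mounted R/W, the SYSNAME() system (or
whichever system happens to mount it first) owns it and every other image is a
client that function-ships to it.  A sketch with placeholder names:

   MOUNT FILESYSTEM('OMVS.DB2TEXT.HFS')
         TYPE(HFS) MODE(RDWR)
         MOUNTPOINT('/u/db2text')
         SYSNAME(PRD1)              /* PRD1 owns it; others are clients     */
         AUTOMOVE                   /* ownership can move if PRD1 goes away */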


Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Mark Zelden
On Tue, 27 May 2008 16:08:00 -0500, Mark Zelden [EMAIL PROTECTED]
wrote:

On Tue, 27 May 2008 13:58:19 -0700, Edward Jaffe
[EMAIL PROTECTED] wrote:

Mark Zelden wrote:
 Except for read only requests (even when mounted R/W).  IIRC, those are
 handled from the local system regardless of the file system owner.


Are you sure about that? I could be wrong, but I thought that all
requests, when mounted R/W, were function shipped. Is there a manual,
white paper, or similar reference that describes this processing
authoritatively?


Well, I did say IIRC.  :-)  So I just looked this up in the planning
manual.  First, a definition of sysplex-aware vs. sysplex-unaware:

If a PFS allows a file system to be locally accessed on all systems
in a sysplex for a particular mode, then the PFS is sysplex-aware for that
mode. If a PFS requires that a file system be accessed through the
remote owning system from all other systems in a sysplex for a particular
mode, then the PFS is sysplex-unaware for that mode.



And here is the part I remembered that led to my statement:



 For example, HFS is sysplex-unaware for read-write mode, because all
non-owning systems must access
read-write file systems through the remote owning system. The non-owning
systems are said to be sysplex clients. However, HFS is
sysplex-aware for read-only mode, which means that each system can access
read-only file systems locally, and do not need to contact the
owning system



Hit send too early...  

So from the above, it looks like the file system has to be mounted read-only.
However, I thought I read somewhere else that the system was smart enough
to handle a read request locally.   Still looking...

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Ted MacNEIL
However, I thought I read somewhere else that the system was smart enough to 
handle a read request locally.

I would think local reads could cause an integrity problem.
-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: USS file sharing in z/OS

2008-05-27 Thread Edward Jaffe

Mark Zelden wrote:

So from the above, it looks like the file system has to be mounted read-only.
  


Right. And, just to make sure we're both on the same page, the quoted 
information says "... all non-owning systems must access read-write 
file systems through the remote owning system" and "each system can 
access read-only file systems locally, and do not need to contact the 
owning system."


So, the determination of whether access, by a non-owning system, is 
local or remote is based on the R/O vs R/W status of the file system and 
*not* whether the individual request is a read or a write.



However, I thought I read somewhere else that the system was smart enough
to handle a read request locally.   Still looking...
  


To make that work, they would need to have some sort of cross-system 
buffer synchronization/invalidation, block-level owning token, 
serialization, or other integrity/consistency mechanism to keep the file 
system from returning the wrong data. They don't have such a thing for 
HFS/zFS ... yet. ;-)


--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html