Our approach is very similar to George's. We access the tools disk as R
rather than P. We also have an SFS directory for local System Programmer
tools. The latter does not include things we might need in an emergency;
those go on a minidisk.
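
For anyone wiring up something similar, the access statements involved look
roughly like the sketch below; the owner id, device address, filepool, and
directory names are invented for illustration, not our actual ones.

   /* fragment of a hypothetical PROFILE/SYSPROF EXEC (REXX)        */
   'CP LINK TOOLS 19A 19A RR'          /* shared tools minidisk      */
   'ACCESS 19A R'                      /* accessed as R, not P       */
   'ACCESS TOOLPOOL:SYSPROG.TOOLS T'   /* local sysprog tools in SFS */
   If rc <> 0 Then                     /* SFS may not be up yet      */
      Say 'SFS tools directory unavailable; continuing without it'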

We have another disk that is the common 191 for home grown service
machines. It is accessed as B/A before the driver is started. All code
written for the service machines goes on this disk. By doing this, we
know where the code is when a server has trouble.  
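
A service machine's PROFILE EXEC might therefore do something like the
following before starting its driver; the owner id, device addresses, and
driver name below are invented for illustration.

   /* fragment of a hypothetical service-machine PROFILE EXEC (REXX) */
   'CP LINK SRVCODE 191 291 RR'   /* common code disk, invented ids   */
   'ACCESS 291 B/A'               /* B as a read-only extension of A  */
   'EXEC MYDRIVER'                /* then start the server's driver   */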

There are a few good reasons for having the tools and common files on
minidisks rather than in SFS. One is that they are available even if SFS is
down. This includes the time before SFS is initialized, the time SFS is in
dedicated maintenance mode (e.g., while increasing the size of the catalog
minidisks), and any time it is down for some other reason.

And for those who say that SFS is so reliable that its being down is of no
concern, I say take off those rose-colored glasses. We have had two major
and one minor outage of SFS in the past three years. We were hamstrung for
two days when the catalog was corrupted. This followed a datacenter
migration in which the disks were transferred via wire. The problem did not
show up for three days, and when it surfaced, it showed up as a CP abend.
When we finally connected the repeated abends with SFS, we had to take the
server down and send a copy of the catalog to the support center, where a
tailored zap was created that would, in essence, cause the server to skip
over the corrupted catalog entries. This was an iterative process. After
zapping the catalog, we were able to unload the filepool, generate a new
filepool on new disks, and reload the data into it.

More recently, we have had to take the server down twice to increase the
size of the catalog disks. When we increased the size the first time, we
underestimated how rapidly the catalog would grow. SFS had been in use for
several years and was fairly large at the time. We thought we were being
generous when we gave the catalog four times the space it had occupied
before. We were wrong; that only lasted about a year and a half.

Regards, 
Richard Schuh 

-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of George Haddad
Sent: Monday, July 09, 2007 2:02 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Philosophy on Tool Installations

Our philosophy has always been to leave the 19E/Y-disk for IBM. 3rd-party
and homegrown "public" tools were put on our "P-disk" (public), 19A at MSU.
A hook into SYSPROF accessed this for all users. (Sysprog tools were on a
separate disk, as are Ops-only tools.) Accessing our public disk at P meant
that it was searched prior to the S- or Y-disk, so
that we could put "cover" Execs and the like on it. Some tools (certain 
compilers and database software --- things with lots of individual 
files) were kept on their own disks, with front-end wrappers placed
on our P-disk to Link/Access/Execute those tools. These wrappers were
1-liner execs which called a generic "table-lookup/link/access" tool, 
also on our P-disk.
Thus, when changes were needed to the "wrappers," they could be made in a
single executable.
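
For readers who have not built this sort of thing, the pattern looks roughly
like the pair of execs sketched below. The exec names, table contents, disk
addresses, and filemodes are all invented for illustration; this is not
George's actual code. Note that the tool's disk is accessed at a filemode
that sorts ahead of P, so the real exec is found rather than the wrapper of
the same name.

   /* FOO EXEC - hypothetical 1-liner wrapper kept on the P-disk */
   Parse Arg args
   'EXEC RUNTOOL FOO' args      /* RUNTOOL is an invented name   */
   Exit rc

   /* RUNTOOL EXEC - sketch of a generic lookup/link/access helper  */
   Parse Upper Arg tool args
   /* table entries: toolname  owner  owner-vdev  local-vdev  fmode */
   table = 'FOO FOOSRV 193 333 E',
           'BAR BARSRV 191 334 G'
   Do i = 1 To Words(table) By 5
      If Word(table, i) = tool Then Leave
   End
   If i > Words(table) Then Do
      Say 'RUNTOOL: no table entry for' tool
      Exit 28
   End
   owner = Word(table, i+1)
   ovdev = Word(table, i+2)
   lvdev = Word(table, i+3)
   fm    = Word(table, i+4)
   'CP LINK' owner ovdev lvdev 'RR'   /* link the tool's own disk    */
   'ACCESS' lvdev fm                  /* filemode sorts ahead of P   */
   'EXEC' tool args                   /* run the real tool           */
   Exit rc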

Stephen Frazier wrote:
> The answer is yes. A tool that is only needed by a few would be placed
> on its own mini-disk and linked to by those who need it. A tool that
> is used by almost everyone is placed on the Y disk so everyone has 
> access to it all the time.
>
> [EMAIL PROTECTED] wrote:
>>
>> What is the best practice for installing tools into the z/VM space?
>>
>> 1. Do you create a mini-disk for each tool and have everyone who 
>> needs it link to that disk for the time needed to use the tool?
>>
>> 2. Do you create a common mini-disk that is accessed by every user?
>>
>> (in z/OS I would put the tools into their own libraries and then put 
>> those libraries into the linklist if everyone needed access or 
>> provide information on how to steplib for those infrequent uses).
>>
>> Thanks (and please forgive the mention of z/OS)
>>
>
