I did maintenance the way Jim and Kris do for a while.  Migrating to new
releases was such a pain that I now do SERVICE and PUT2PROD on the test
system first, then do them again in production.  This way, I can run
MIGRATE to get ready for new releases.  I added a 290 disk to MAINT that
has a backup copy of 190.  If I have to back out CP, I can IPL the old
nucleus (CPLOLD MODULE).  If CMS is broken, I can IPL 290.  If TCP/IP is
broken, we have a console access product (Automation Point) that we can
use to log on.  If I can't back out the offending PTF, I can restore
MAINT from VM:Backup. 
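The backout paths above can be sketched as directory and console fragments.  Everything here is illustrative: the extents, volser, and passwords are invented, and only the 190/290 relationship and the CPLOLD name come from the note above.

```
* Hypothetical MAINT directory fragment: 290 is a backup copy of the
* 190 CMS system disk (extents and volser are examples only)
MDISK 190 3390 0001 0107 VMSYS1 MR RD190PW WR190PW MW190PW
MDISK 290 3390 0108 0107 VMSYS1 MR RD290PW WR290PW MW290PW

* If CMS is broken, log on MAINT and boot the backup system disk:
IPL 290

* If CP must be backed out, re-IPL the LPAR and, on the Stand-Alone
* Program Loader (SAPL) panel, change the module name from CPLOAD
* to CPLOLD to load the previous CP nucleus
```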

                                                       Dennis O'Brien

"A pistol! Are you expecting trouble Sir?" "No Miss, were I expecting
trouble I'd have a rifle."

-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Kris Buelens
Sent: Tuesday, July 15, 2008 01:28
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Question on how RSU maintenance is being handled
with DIRMAINT and RACF in the picture

I work like Jim: never PUT2PROD.
Copy the runtime minidisks from the install user (e.g. 5VMTCP30) to
alternate addresses of the "active" user (e.g. TCPMAINT).  When the
time is right, the mdisk addresses are swapped.  This process is first
tested on the SW installation system and then repeated on production
systems.  The minidisk passwords tell which is which.
This way we always have a backout, and operators can be told which
minidisk addresses to swap and which servers to restart in case a
backout is required.
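The address swap can be sketched with a pair of hypothetical TCPMAINT minidisk statements (the 591/1591 addresses, extents, and passwords here are invented for illustration, not Kris's actual layout):

```
* Live TCP/IP code at 591; freshly serviced copy staged at 1591.
* The passwords record which service level each disk holds.
MDISK  591 3390 0001 0050 TCPVOL MR ROLDRSU WOLDRSU
MDISK 1591 3390 0051 0050 TCPVOL MR RNEWRSU WNEWRSU

* To activate: swap the two addresses (591 <-> 1591) in the directory
* and restart the TCP/IP server.  To back out: swap them back.
```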

2008/7/15 Feller, Paul <[EMAIL PROTECTED]>:
>
>  Interesting.  Our "second" level system is actually an LPAR.  I should
> have mentioned that we did apply the RSU to that LPAR first.  We went
> through the same steps on that LPAR.  I was just trying to see how
> other people handle the maintenance process with RACF and DIRMAINT.
>
> Paul Feller
> AIT Mainframe Technical Support
>
>
> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
> Behalf Of Jim Bohnsack
> Sent: Monday, July 14, 2008 1:48 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Question on how RSU maintenance is being handled with
> DIRMAINT and RACF in the picture
>
> Paul--I take a little more cautious approach to maintenance than you
> do.  I *DO NOT EVER* put RSU or any kind of maintenance on a
> production system or on a "test" system, that being defined as not
> really a production system but one that users can get access to in
> order to test new or changed application code.  I put maintenance and
> build new releases on a 2nd-level VM system that no one but sysprogs
> can get to.  When an RSU changes something, after seeing that the
> change will even run on the 2nd level, I'll copy the affected mdisks
> to the 1st level with different mdisk addresses.  For RACF, for
> example, I'd copy the 2nd-level 305 and probably the 490 disks to the
> 1st level as 1305 and 1490.  Then at the time of the change, I just
> readdress the old 305 and 490 disks to something else, 2305 and 2490
> perhaps, readdress 1305 and 1490 to 305 and 490, and at some quiet
> time of the day, restart RACF.  Which disks you need to play with
> depends on the component, but, in general, that's what I do.  I
> always leave comments in the directory entry so I know the current
> status of the disks.  The mdisk passwords are very useful for that
> kind of thing.  READ WRITE 53VM0801 is good.
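A sketch of the three-way readdress Jim describes, using his RACF example (the extent, volser, and comment layout are illustrative; only the addresses and the 53VM0801 password come from his note):

```
* Rotation at cutover time:
*   live   305/ 490  -> 2305/2490   (kept as backout copies)
*   staged 1305/1490 ->  305/ 490   (new service goes live)
* The password encodes the service level: 53VM0801 = z/VM 5.3 RSU 0801
MDISK  305 3390 0001 0034 RACVOL MR READ 53VM0801
* ... then, at a quiet time of day, restart the RACF server
```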
>
> Everyone will have their own ideas as to how to do maintenance, but
> this is an approach that has worked for me, or at least one that I
> have developed and followed over the course of 30 years of VM
> maintenance.
>
> Jim
>
> Feller, Paul wrote:
> >  We have been a z/VM shop for about three years and have applied RSU
> > maintenance during those three years.  We currently are running z/VM
> > 5.3 and are working on applying RSU0801.  This is the first time we
> > have done an RSU with DIRMAINT and RACF in the picture.  We are
> > wondering how other people handle doing maintenance with DIRMAINT
> > and RACF.
> >
> > Here is what we did.
> > 1) To get the PUT2PROD for DIRMAINT maintenance to work, we shut
> > down the DIRMAINT guest and all of the DIRSAT guests.  (We run three
> > LPARs that share the user directory.)
> >
> > 2) To get the PUT2PROD for RACF maintenance to work, we shut down
> > the affected LPAR.  The other two LPARs stayed up.  We IPLed the
> > affected LPAR with NOAUTOLOG and XAUTOLOG RACMAINT.  We then
> > completed the PUT2PROD.  We then did a SHUTDOWN REIPL of the LPAR
> > and came back up normally.
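The RACF sequence above can be sketched as a console outline.  The server and command names (NOAUTOLOG, RACMAINT, PUT2PROD, SHUTDOWN REIPL) are taken from Paul's steps; the ordering comments are a generic outline, not a verbatim log:

```
* 1. Shut down the affected LPAR, then IPL it with NOAUTOLOG so no
*    service machines are started automatically
* 2. From the operator console, start the backup RACF service machine:
XAUTOLOG RACMAINT
* 3. Log on the maintenance user and complete the PUT2PROD
* 4. Restart the LPAR normally:
SHUTDOWN REIPL
```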
> >
> >
> > Paul Feller
> > AIT Mainframe Technical Support
> >
> >
>
> --
> Jim Bohnsack
> Cornell University
> (607) 255-1760
> [EMAIL PROTECTED]



--
Kris Buelens,
IBM Belgium, VM customer support
