Ok, will do. Thanks.
Another question..
Because I'm paranoid and have no idea what type of backup the VM guy has on
this system, I've decided to DDR the two CP-owned volumes to a backup disk
before adding the spool.
I have the disks attached to maint 2nd level
When I copy all of MAINT's 123 to the backup vol, will that cause problems?
I.e., I won't end up with two sysres volumes with the same label - right?
Just checking.
-James

On 5/4/07, Kris Buelens <[EMAIL PROTECTED]> wrote:

It seems all right for the spool....

Ouch, but what do I see?
   link * cf1 cf1 mw
You should never link in MW mode if you like the data on a CMS minidisk,
never, never, never (or almost never).
So, change that into
  link * cf1 cf1 m
or
  link * cf1 cf1 mr
The advantage of linking MR is that you get the minidisk in RO mode when
someone else has it RW, and then a Q LINKS vdev lets you see the
virtual address of whoever is blocking the RW link:
   link maint 193 999 m
   HCPLNM105E MAINT 0193 not linked; R/W by KRIS3
   link maint 193 999 mr
   HCPLNM102E DASD 0999 forced R/O; R/W by KRIS3
   q links 999
   KRIS3    0111 R/W, KRIS     0999 R/O
   cp send cp kris3 DET 111
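
The same check can be scripted; a rough REXX sketch (LINKCHK is a made-up
name, vdev 999 is just a spare address, and the comments describe the
intent, not a tested exec):

```rexx
/* LINKCHK EXEC - hypothetical sketch: take an MR link at a spare */
/* vdev and, if CP forced it R/O, show who is holding it R/W.     */
arg owner vdev .
'CP LINK' owner vdev '999 MR'    /* MR: falls back to R/O if busy */
'CP QUERY LINKS 999'             /* lists each linker and mode    */
```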

"never, never, never (or almost never)":  The only exception I see for an
MW link to a CMS formatted minidisk is when you are 100% certain that the
R/W linker has not ACCESSed that minidisk.  Example: the 191 minidisk of a
Linux guest that when it has not IPLed CMS.

Kris

2007/5/4, James M <[EMAIL PROTECTED]>:
>
> Now I must put all this theory to work. (my heart's thumping)
> I've been ordered to add more spool to a critical second level guest
> because of a rscs problem I reported in a new message today.
>
> Before I do it I want to double-check to make sure I've got everything
> right. (It's been years since I did VM - early ESA.)
>
> Here's what I think I must do..... PLEASE correct me:
>
> att free cuu to 2nd level vm
>
> on 2nd level maint att cuu *
> cpfmtxa
> label (i.e. spoolz)
> format
> allocate
> spol 0-end
>
> Update sysconfig--->
>
> cprel a
> link * cf1 cf1 mw
> acc cf1 z
> x system config z
>    slot 1 vol1
>    slot 2 vol2
>    slot 3 reserved --> which I will change to spoolz
> rel z(det
> link * cf1 cf1 rr
> cpacc * cf1 a sr
>
> def cpown slot 3 spoolz
>
> att cuu system spoolz
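>
> For reference, after the edit the CP_Owned list in SYSTEM CONFIG would
> look something like this (SPOOLZ is from the steps above; the other
> labels are illustrative, not from the real config):

```
CP_Owned   Slot 1   VOL1
CP_Owned   Slot 2   VOL2
CP_Owned   Slot 3   SPOOLZ
```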
>
> on the first level system I must..
> update direct entry for 2nd level guest adding dedicate cuu
> I don't believe I need to update 1st level system config for dedicated
> disks - right?
>
> Did I miss anything - any gotchas?
>
> Thanks for any and all help
> -James
>
>
> On 4/30/07, Kris Buelens < [EMAIL PROTECTED]> wrote:
> >
> > Murphy's law dictates a small change in the order of things.
> >
> > With the order you propose there is a time window where the SYSTEM
> > CONFIG does not know the new spool volume, but CP uses it for new spool
> > files.  If CP goes down in that window, all spool files with some parts of
> > the new volume will be lost after the restart.
> >
> > The good order to be 100% safe is:
> > 1. Format & allocate (like you say)
> > 2. Update SYSTEM CONFIG
> > 3. DEFINE CPOWNED
> > 4. ATTACH to SYSTEM
> >
> > To find which spool files have parts on a given volume, get this tool
> >      http://www.vm.ibm.com/download/packages/descript.cgi?SPOOLCHN
> >
> > 2007/4/30, James M <[EMAIL PROTECTED]>:
> > >
> > > z/VM 5.2
> > >
> > > I'm backing up the vm sys prog and sure enough - bing - a problem
> > > immediately.
> > > Problem solved but I have a couple of questions.
> > >
> > > adding a new spool full volume steps - is this correct..
> > > att cuu * as vdev
> > > cpfmtxa vdev
> > > label vmspxx
> > > done...
> > > format
> > > done..
> > > allocate
> > > spol 0-end
> > > done...
> > > end
> > > Once cpfmtxa ends - att  cuu system vmspxx  & update cpowned list in
> > > config file.
> > > ...and I'm off and running.
> > >
> > > Followup question - is there a convenient way to migrate spool files
> > > from that volume without cold starting?
> > >
> > > Another spool followup question.
> > > If I query alloc and do the math on the spool numbers I get 80%
> > > If I q alloc spool I get 53%
> > > How come?
> > >
> > > Any REXX execs out there that can monitor spool and send me an
> > > email if > x%?
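> > >
> > > (A minimal sketch of such an exec - SPLWATCH is a made-up name, the
> > > default threshold and the SUMMARY-line parsing are assumptions for
> > > z/VM 5.2 output, and it sends a CP MSG rather than real email:)

```rexx
/* SPLWATCH EXEC - hypothetical sketch: warn when spool usage passes */
/* a threshold.  Parses the SUMMARY line of Q ALLOC SPOOL; verify    */
/* the column layout on your own release before trusting it.         */
arg limit .
if limit = '' then limit = 90               /* assumed default       */
'PIPE CP QUERY ALLOC SPOOL | LOCATE /SUMMARY/ | VAR line'
parse var line . total inuse pct '%' .
if pct >= limit then
  'CP MSG OPERATOR Spool' pct'% full:' inuse 'of' total 'pages'
```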
> > >
> > > Thanks
> > > James
> > >
> >
> >
> > IBM Belgium, VM customer support
